2026-03-09 00:00:07.041539 | Job console starting
2026-03-09 00:00:07.074923 | Updating git repos
2026-03-09 00:00:07.211231 | Cloning repos into workspace
2026-03-09 00:00:07.560396 | Restoring repo states
2026-03-09 00:00:07.592080 | Merging changes
2026-03-09 00:00:07.592111 | Checking out repos
2026-03-09 00:00:08.090478 | Preparing playbooks
2026-03-09 00:00:09.017105 | Running Ansible setup
2026-03-09 00:00:16.111724 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-09 00:00:18.650790 |
2026-03-09 00:00:18.651270 | PLAY [Base pre]
2026-03-09 00:00:18.683795 |
2026-03-09 00:00:18.684313 | TASK [Setup log path fact]
2026-03-09 00:00:18.744675 | orchestrator | ok
2026-03-09 00:00:18.844989 |
2026-03-09 00:00:18.845851 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-09 00:00:18.938103 | orchestrator | ok
2026-03-09 00:00:18.973375 |
2026-03-09 00:00:18.973597 | TASK [emit-job-header : Print job information]
2026-03-09 00:00:19.056964 | # Job Information
2026-03-09 00:00:19.057168 | Ansible Version: 2.16.14
2026-03-09 00:00:19.057205 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-09 00:00:19.057241 | Pipeline: periodic-midnight
2026-03-09 00:00:19.057264 | Executor: 521e9411259a
2026-03-09 00:00:19.057284 | Triggered by: https://github.com/osism/testbed
2026-03-09 00:00:19.057306 | Event ID: ba3e5e257f914ab0a0c5d45d3402b562
2026-03-09 00:00:19.064364 |
2026-03-09 00:00:19.064488 | LOOP [emit-job-header : Print node information]
2026-03-09 00:00:19.319395 | orchestrator | ok:
2026-03-09 00:00:19.319652 | orchestrator | # Node Information
2026-03-09 00:00:19.319689 | orchestrator | Inventory Hostname: orchestrator
2026-03-09 00:00:19.319723 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-09 00:00:19.319746 | orchestrator | Username: zuul-testbed06
2026-03-09 00:00:19.319767 | orchestrator | Distro: Debian 12.13
2026-03-09 00:00:19.319790 | orchestrator | Provider: static-testbed
2026-03-09 00:00:19.319811 | orchestrator | Region:
2026-03-09 00:00:19.319832 | orchestrator | Label: testbed-orchestrator
2026-03-09 00:00:19.319852 | orchestrator | Product Name: OpenStack Nova
2026-03-09 00:00:19.319871 | orchestrator | Interface IP: 81.163.193.140
2026-03-09 00:00:19.339825 |
2026-03-09 00:00:19.339943 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-09 00:00:20.952211 | orchestrator -> localhost | changed
2026-03-09 00:00:20.960668 |
2026-03-09 00:00:20.960795 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-09 00:00:23.417762 | orchestrator -> localhost | changed
2026-03-09 00:00:23.435111 |
2026-03-09 00:00:23.435220 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-09 00:00:24.027315 | orchestrator -> localhost | ok
2026-03-09 00:00:24.032891 |
2026-03-09 00:00:24.032989 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-09 00:00:24.092077 | orchestrator | ok
2026-03-09 00:00:24.125014 | orchestrator | included: /var/lib/zuul/builds/c90ab4e2d1c44c968e4ff9157216eb51/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-09 00:00:24.136106 |
2026-03-09 00:00:24.136204 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-09 00:00:27.991673 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-09 00:00:27.992732 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/c90ab4e2d1c44c968e4ff9157216eb51/work/c90ab4e2d1c44c968e4ff9157216eb51_id_rsa
2026-03-09 00:00:27.992810 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/c90ab4e2d1c44c968e4ff9157216eb51/work/c90ab4e2d1c44c968e4ff9157216eb51_id_rsa.pub
2026-03-09 00:00:27.992982 | orchestrator -> localhost | The key fingerprint is:
2026-03-09 00:00:27.993013 | orchestrator -> localhost | SHA256:mGP8c3Ugz63nbHdF5UIZVUL+bz4HPptzFep0hSoJr8g zuul-build-sshkey
2026-03-09 00:00:27.993035 | orchestrator -> localhost | The key's randomart image is:
2026-03-09 00:00:27.993062 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-09 00:00:27.993081 | orchestrator -> localhost | | .+++|
2026-03-09 00:00:27.993100 | orchestrator -> localhost | | .o..|
2026-03-09 00:00:27.993117 | orchestrator -> localhost | | . ...o.|
2026-03-09 00:00:27.993133 | orchestrator -> localhost | | . o. + ooo+|
2026-03-09 00:00:27.993150 | orchestrator -> localhost | | * So .+.+o+|
2026-03-09 00:00:27.993170 | orchestrator -> localhost | | . o +..=..+|
2026-03-09 00:00:27.993186 | orchestrator -> localhost | | . .o...+.o.=|
2026-03-09 00:00:27.993202 | orchestrator -> localhost | | E .o ====|
2026-03-09 00:00:27.993219 | orchestrator -> localhost | | .=*=|
2026-03-09 00:00:27.993236 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-09 00:00:27.993334 | orchestrator -> localhost | ok: Runtime: 0:00:03.033759
2026-03-09 00:00:28.004180 |
2026-03-09 00:00:28.004272 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-09 00:00:28.059740 | orchestrator | ok
2026-03-09 00:00:28.084326 | orchestrator | included: /var/lib/zuul/builds/c90ab4e2d1c44c968e4ff9157216eb51/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-09 00:00:28.103825 |
2026-03-09 00:00:28.103929 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-09 00:00:28.193036 | orchestrator | skipping: Conditional result was False
2026-03-09 00:00:28.202556 |
2026-03-09 00:00:28.202668 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-09 00:00:29.698452 | orchestrator | changed
2026-03-09 00:00:29.704555 |
2026-03-09 00:00:29.704642 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-09 00:00:29.985722 | orchestrator | ok
2026-03-09 00:00:29.998886 |
2026-03-09 00:00:29.998978 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-09 00:00:30.531370 | orchestrator | ok
2026-03-09 00:00:30.536128 |
2026-03-09 00:00:30.536202 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-09 00:00:31.017811 | orchestrator | ok
2026-03-09 00:00:31.022598 |
2026-03-09 00:00:31.022676 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-09 00:00:31.059237 | orchestrator | skipping: Conditional result was False
2026-03-09 00:00:31.064764 |
2026-03-09 00:00:31.064850 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-09 00:00:32.029007 | orchestrator -> localhost | changed
2026-03-09 00:00:32.045589 |
2026-03-09 00:00:32.045691 | TASK [add-build-sshkey : Add back temp key]
2026-03-09 00:00:32.785089 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/c90ab4e2d1c44c968e4ff9157216eb51/work/c90ab4e2d1c44c968e4ff9157216eb51_id_rsa (zuul-build-sshkey)
2026-03-09 00:00:32.785336 | orchestrator -> localhost | ok: Runtime: 0:00:00.023828
2026-03-09 00:00:32.792306 |
2026-03-09 00:00:32.792390 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-09 00:00:33.479642 | orchestrator | ok
2026-03-09 00:00:33.484611 |
2026-03-09 00:00:33.484693 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-09 00:00:33.519309 | orchestrator | skipping: Conditional result was False
2026-03-09 00:00:33.609076 |
2026-03-09 00:00:33.609173 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-09 00:00:33.983683 | orchestrator | ok
2026-03-09 00:00:34.013414 |
2026-03-09 00:00:34.013529 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-09 00:00:34.071025 | orchestrator | ok
2026-03-09 00:00:34.077180 |
2026-03-09 00:00:34.077276 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-09 00:00:34.718224 | orchestrator -> localhost | ok
2026-03-09 00:00:34.724312 |
2026-03-09 00:00:34.724411 | TASK [validate-host : Collect information about the host]
2026-03-09 00:00:36.369407 | orchestrator | ok
2026-03-09 00:00:36.405766 |
2026-03-09 00:00:36.405912 | TASK [validate-host : Sanitize hostname]
2026-03-09 00:00:36.556230 | orchestrator | ok
2026-03-09 00:00:36.563340 |
2026-03-09 00:00:36.563438 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-09 00:00:38.050674 | orchestrator -> localhost | changed
2026-03-09 00:00:38.055781 |
2026-03-09 00:00:38.055867 | TASK [validate-host : Collect information about zuul worker]
2026-03-09 00:00:38.872424 | orchestrator | ok
2026-03-09 00:00:38.876818 |
2026-03-09 00:00:38.876905 | TASK [validate-host : Write out all zuul information for each host]
2026-03-09 00:00:40.103378 | orchestrator -> localhost | changed
2026-03-09 00:00:40.117616 |
2026-03-09 00:00:40.117724 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-09 00:00:40.411351 | orchestrator | ok
2026-03-09 00:00:40.419145 |
2026-03-09 00:00:40.419240 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-09 00:01:58.069481 | orchestrator | changed:
2026-03-09 00:01:58.069807 | orchestrator | .d..t...... src/
2026-03-09 00:01:58.069860 | orchestrator | .d..t...... src/github.com/
2026-03-09 00:01:58.069892 | orchestrator | .d..t...... src/github.com/osism/
2026-03-09 00:01:58.069920 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-09 00:01:58.070098 | orchestrator | RedHat.yml
2026-03-09 00:01:58.087260 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-09 00:01:58.087277 | orchestrator | RedHat.yml
2026-03-09 00:01:58.087328 | orchestrator | = 1.53.0"...
2026-03-09 00:02:09.978585 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-09 00:02:09.998790 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-09 00:02:10.183269 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-09 00:02:10.993519 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-09 00:02:11.390321 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-09 00:02:12.068043 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-09 00:02:12.142128 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-09 00:02:12.794002 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-09 00:02:12.794122 | orchestrator |
2026-03-09 00:02:12.794129 | orchestrator | Providers are signed by their developers.
2026-03-09 00:02:12.794134 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-09 00:02:12.794139 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-09 00:02:12.794150 | orchestrator |
2026-03-09 00:02:12.794155 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-09 00:02:12.794159 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-09 00:02:12.794171 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-09 00:02:12.794176 | orchestrator | you run "tofu init" in the future.
2026-03-09 00:02:12.794415 | orchestrator |
2026-03-09 00:02:12.794435 | orchestrator | OpenTofu has been successfully initialized!
2026-03-09 00:02:12.794440 | orchestrator |
2026-03-09 00:02:12.794444 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-09 00:02:12.794449 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-09 00:02:12.794453 | orchestrator | should now work.
2026-03-09 00:02:12.794457 | orchestrator |
2026-03-09 00:02:12.794461 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-09 00:02:12.794465 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-09 00:02:12.794474 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-09 00:02:12.946235 | orchestrator | Created and switched to workspace "ci"!
2026-03-09 00:02:12.946277 | orchestrator |
2026-03-09 00:02:12.946283 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-09 00:02:12.946288 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-09 00:02:12.946293 | orchestrator | for this configuration.
2026-03-09 00:02:13.087331 | orchestrator | ci.auto.tfvars
2026-03-09 00:02:13.140507 | orchestrator | default_custom.tf
2026-03-09 00:02:14.605211 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-09 00:02:15.112639 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-09 00:02:15.340029 | orchestrator |
2026-03-09 00:02:15.340108 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-09 00:02:15.340166 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-09 00:02:15.340179 | orchestrator | + create
2026-03-09 00:02:15.340184 | orchestrator | <= read (data resources)
2026-03-09 00:02:15.340189 | orchestrator |
2026-03-09 00:02:15.340194 | orchestrator | OpenTofu will perform the following actions:
2026-03-09 00:02:15.340336 | orchestrator |
2026-03-09 00:02:15.340352 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-09 00:02:15.340357 | orchestrator | # (config refers to values not yet known)
2026-03-09 00:02:15.340361 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-09 00:02:15.340365 | orchestrator | + checksum = (known after apply)
2026-03-09 00:02:15.340370 | orchestrator | + created_at = (known after apply)
2026-03-09 00:02:15.340374 | orchestrator | + file = (known after apply)
2026-03-09 00:02:15.340378 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.340395 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.340399 | orchestrator | + min_disk_gb = (known after apply)
2026-03-09 00:02:15.340403 | orchestrator | + min_ram_mb = (known after apply)
2026-03-09 00:02:15.340407 | orchestrator | + most_recent = true
2026-03-09 00:02:15.340411 | orchestrator | + name = (known after apply)
2026-03-09 00:02:15.340414 | orchestrator | + protected = (known after apply)
2026-03-09 00:02:15.340418 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.340424 | orchestrator | + schema = (known after apply)
2026-03-09 00:02:15.340428 | orchestrator | + size_bytes = (known after apply)
2026-03-09 00:02:15.340432 | orchestrator | + tags = (known after apply)
2026-03-09 00:02:15.340435 | orchestrator | + updated_at = (known after apply)
2026-03-09 00:02:15.340439 | orchestrator | }
2026-03-09 00:02:15.340561 | orchestrator |
2026-03-09 00:02:15.340573 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-09 00:02:15.340578 | orchestrator | # (config refers to values not yet known)
2026-03-09 00:02:15.340582 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-09 00:02:15.340586 | orchestrator | + checksum = (known after apply)
2026-03-09 00:02:15.340590 | orchestrator | + created_at = (known after apply)
2026-03-09 00:02:15.340593 | orchestrator | + file = (known after apply)
2026-03-09 00:02:15.340597 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.340601 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.340604 | orchestrator | + min_disk_gb = (known after apply)
2026-03-09 00:02:15.340608 | orchestrator | + min_ram_mb = (known after apply)
2026-03-09 00:02:15.340612 | orchestrator | + most_recent = true
2026-03-09 00:02:15.340616 | orchestrator | + name = (known after apply)
2026-03-09 00:02:15.340619 | orchestrator | + protected = (known after apply)
2026-03-09 00:02:15.340623 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.340627 | orchestrator | + schema = (known after apply)
2026-03-09 00:02:15.340631 | orchestrator | + size_bytes = (known after apply)
2026-03-09 00:02:15.340634 | orchestrator | + tags = (known after apply)
2026-03-09 00:02:15.340638 | orchestrator | + updated_at = (known after apply)
2026-03-09 00:02:15.340642 | orchestrator | }
2026-03-09 00:02:15.340765 | orchestrator |
2026-03-09 00:02:15.340777 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-09 00:02:15.340782 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-09 00:02:15.340786 | orchestrator | + content = (known after apply)
2026-03-09 00:02:15.340791 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-09 00:02:15.340795 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-09 00:02:15.340798 | orchestrator | + content_md5 = (known after apply)
2026-03-09 00:02:15.340802 | orchestrator | + content_sha1 = (known after apply)
2026-03-09 00:02:15.340806 | orchestrator | + content_sha256 = (known after apply)
2026-03-09 00:02:15.340810 | orchestrator | + content_sha512 = (known after apply)
2026-03-09 00:02:15.340814 | orchestrator | + directory_permission = "0777"
2026-03-09 00:02:15.340818 | orchestrator | + file_permission = "0644"
2026-03-09 00:02:15.340822 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-09 00:02:15.340825 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.340829 | orchestrator | }
2026-03-09 00:02:15.340931 | orchestrator |
2026-03-09 00:02:15.340942 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-09 00:02:15.340946 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-09 00:02:15.340951 | orchestrator | + content = (known after apply)
2026-03-09 00:02:15.340954 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-09 00:02:15.340958 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-09 00:02:15.340962 | orchestrator | + content_md5 = (known after apply)
2026-03-09 00:02:15.340966 | orchestrator | + content_sha1 = (known after apply)
2026-03-09 00:02:15.340970 | orchestrator | + content_sha256 = (known after apply)
2026-03-09 00:02:15.340974 | orchestrator | + content_sha512 = (known after apply)
2026-03-09 00:02:15.340977 | orchestrator | + directory_permission = "0777"
2026-03-09 00:02:15.340981 | orchestrator | + file_permission = "0644"
2026-03-09 00:02:15.340990 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-09 00:02:15.340994 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.340997 | orchestrator | }
2026-03-09 00:02:15.341087 | orchestrator |
2026-03-09 00:02:15.341103 | orchestrator | # local_file.inventory will be created
2026-03-09 00:02:15.341107 | orchestrator | + resource "local_file" "inventory" {
2026-03-09 00:02:15.341111 | orchestrator | + content = (known after apply)
2026-03-09 00:02:15.341115 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-09 00:02:15.341119 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-09 00:02:15.341122 | orchestrator | + content_md5 = (known after apply)
2026-03-09 00:02:15.341126 | orchestrator | + content_sha1 = (known after apply)
2026-03-09 00:02:15.341130 | orchestrator | + content_sha256 = (known after apply)
2026-03-09 00:02:15.341134 | orchestrator | + content_sha512 = (known after apply)
2026-03-09 00:02:15.341138 | orchestrator | + directory_permission = "0777"
2026-03-09 00:02:15.341141 | orchestrator | + file_permission = "0644"
2026-03-09 00:02:15.341145 | orchestrator | + filename = "inventory.ci"
2026-03-09 00:02:15.341149 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.341152 | orchestrator | }
2026-03-09 00:02:15.341251 | orchestrator |
2026-03-09 00:02:15.341262 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-09 00:02:15.341267 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-09 00:02:15.341271 | orchestrator | + content = (sensitive value)
2026-03-09 00:02:15.341274 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-09 00:02:15.341278 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-09 00:02:15.341282 | orchestrator | + content_md5 = (known after apply)
2026-03-09 00:02:15.341286 | orchestrator | + content_sha1 = (known after apply)
2026-03-09 00:02:15.341289 | orchestrator | + content_sha256 = (known after apply)
2026-03-09 00:02:15.341293 | orchestrator | + content_sha512 = (known after apply)
2026-03-09 00:02:15.341297 | orchestrator | + directory_permission = "0700"
2026-03-09 00:02:15.341300 | orchestrator | + file_permission = "0600"
2026-03-09 00:02:15.341304 | orchestrator | + filename = ".id_rsa.ci"
2026-03-09 00:02:15.341308 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.341312 | orchestrator | }
2026-03-09 00:02:15.341377 | orchestrator |
2026-03-09 00:02:15.341389 | orchestrator | # null_resource.node_semaphore will be created
2026-03-09 00:02:15.341393 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-09 00:02:15.341397 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.341401 | orchestrator | }
2026-03-09 00:02:15.341507 | orchestrator |
2026-03-09 00:02:15.341518 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-09 00:02:15.341523 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-09 00:02:15.341526 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:15.341530 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:15.341534 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.341538 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:15.341542 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.341545 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-09 00:02:15.341549 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.341553 | orchestrator | + size = 80
2026-03-09 00:02:15.341557 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:15.341560 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:15.341564 | orchestrator | }
2026-03-09 00:02:15.341659 | orchestrator |
2026-03-09 00:02:15.341674 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-09 00:02:15.341679 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:15.341683 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:15.341686 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:15.341690 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.341698 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:15.341701 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.341705 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-09 00:02:15.341709 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.341713 | orchestrator | + size = 80
2026-03-09 00:02:15.341716 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:15.341720 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:15.341724 | orchestrator | }
2026-03-09 00:02:15.341834 | orchestrator |
2026-03-09 00:02:15.341847 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-09 00:02:15.341851 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:15.341855 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:15.341859 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:15.341862 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.341866 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:15.341870 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.341873 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-09 00:02:15.341877 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.341881 | orchestrator | + size = 80
2026-03-09 00:02:15.341884 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:15.341888 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:15.341892 | orchestrator | }
2026-03-09 00:02:15.341981 | orchestrator |
2026-03-09 00:02:15.341993 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-09 00:02:15.341997 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:15.342001 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:15.342005 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:15.342008 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.342031 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:15.342036 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.342039 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-09 00:02:15.342043 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.342047 | orchestrator | + size = 80
2026-03-09 00:02:15.342051 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:15.342054 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:15.342058 | orchestrator | }
2026-03-09 00:02:15.342159 | orchestrator |
2026-03-09 00:02:15.342171 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-09 00:02:15.342176 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:15.342180 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:15.342183 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:15.342187 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.342191 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:15.342194 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.342201 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-09 00:02:15.342205 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.342208 | orchestrator | + size = 80
2026-03-09 00:02:15.342212 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:15.342216 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:15.342220 | orchestrator | }
2026-03-09 00:02:15.342360 | orchestrator |
2026-03-09 00:02:15.342380 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-09 00:02:15.342384 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:15.342388 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:15.342392 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:15.342396 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.342404 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:15.342408 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.342412 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-09 00:02:15.342416 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.342420 | orchestrator | + size = 80
2026-03-09 00:02:15.342423 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:15.342427 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:15.342431 | orchestrator | }
2026-03-09 00:02:15.342522 | orchestrator |
2026-03-09 00:02:15.342536 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-09 00:02:15.342540 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-09 00:02:15.342544 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:15.342548 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:15.342551 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.342555 | orchestrator | + image_id = (known after apply)
2026-03-09 00:02:15.342559 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.342563 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-09 00:02:15.342566 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.342570 | orchestrator | + size = 80
2026-03-09 00:02:15.342574 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:15.342577 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:15.342581 | orchestrator | }
2026-03-09 00:02:15.342699 | orchestrator |
2026-03-09 00:02:15.342714 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-09 00:02:15.342720 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:15.342724 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:15.342742 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:15.342746 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.342750 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.342754 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-09 00:02:15.342758 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.342762 | orchestrator | + size = 20
2026-03-09 00:02:15.342766 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:15.342770 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:15.342773 | orchestrator | }
2026-03-09 00:02:15.342854 | orchestrator |
2026-03-09 00:02:15.342867 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-09 00:02:15.342871 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:15.342875 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:15.342879 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:15.342882 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.342886 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.342890 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-09 00:02:15.342894 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.342897 | orchestrator | + size = 20
2026-03-09 00:02:15.342901 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:15.342905 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:15.342908 | orchestrator | }
2026-03-09 00:02:15.343309 | orchestrator |
2026-03-09 00:02:15.343391 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-09 00:02:15.343402 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:15.343409 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:15.343416 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:15.343422 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.343428 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.343433 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-09 00:02:15.343439 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.343457 | orchestrator | + size = 20
2026-03-09 00:02:15.343463 | orchestrator | + volume_retype_policy = "never"
2026-03-09 00:02:15.343469 | orchestrator | + volume_type = "ssd"
2026-03-09 00:02:15.343475 | orchestrator | }
2026-03-09 00:02:15.343574 | orchestrator |
2026-03-09 00:02:15.343593 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-09 00:02:15.343600 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-09 00:02:15.343606 | orchestrator | + attachment = (known after apply)
2026-03-09 00:02:15.343612 | orchestrator | + availability_zone = "nova"
2026-03-09 00:02:15.343618 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.343624 | orchestrator | + metadata = (known after apply)
2026-03-09 00:02:15.343629 | orchestrator | + name = "testbed-volume-3-node-3" 2026-03-09 00:02:15.343635 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.343641 | orchestrator | + size = 20 2026-03-09 00:02:15.343647 | orchestrator | + volume_retype_policy = "never" 2026-03-09 00:02:15.343652 | orchestrator | + volume_type = "ssd" 2026-03-09 00:02:15.343658 | orchestrator | } 2026-03-09 00:02:15.343772 | orchestrator | 2026-03-09 00:02:15.343791 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created 2026-03-09 00:02:15.343798 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-09 00:02:15.343804 | orchestrator | + attachment = (known after apply) 2026-03-09 00:02:15.343809 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:15.343815 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.343821 | orchestrator | + metadata = (known after apply) 2026-03-09 00:02:15.343827 | orchestrator | + name = "testbed-volume-4-node-4" 2026-03-09 00:02:15.343833 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.343844 | orchestrator | + size = 20 2026-03-09 00:02:15.343850 | orchestrator | + volume_retype_policy = "never" 2026-03-09 00:02:15.343856 | orchestrator | + volume_type = "ssd" 2026-03-09 00:02:15.343861 | orchestrator | } 2026-03-09 00:02:15.343957 | orchestrator | 2026-03-09 00:02:15.343976 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created 2026-03-09 00:02:15.343982 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-09 00:02:15.343988 | orchestrator | + attachment = (known after apply) 2026-03-09 00:02:15.343994 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:15.344000 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.344005 | orchestrator | + metadata = (known after apply) 2026-03-09 00:02:15.344011 | orchestrator | + name = "testbed-volume-5-node-5" 
2026-03-09 00:02:15.344017 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.344022 | orchestrator | + size = 20 2026-03-09 00:02:15.344028 | orchestrator | + volume_retype_policy = "never" 2026-03-09 00:02:15.344034 | orchestrator | + volume_type = "ssd" 2026-03-09 00:02:15.344039 | orchestrator | } 2026-03-09 00:02:15.344120 | orchestrator | 2026-03-09 00:02:15.344137 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created 2026-03-09 00:02:15.344144 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-09 00:02:15.344150 | orchestrator | + attachment = (known after apply) 2026-03-09 00:02:15.344155 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:15.344161 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.344167 | orchestrator | + metadata = (known after apply) 2026-03-09 00:02:15.344172 | orchestrator | + name = "testbed-volume-6-node-3" 2026-03-09 00:02:15.344178 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.344184 | orchestrator | + size = 20 2026-03-09 00:02:15.344189 | orchestrator | + volume_retype_policy = "never" 2026-03-09 00:02:15.344195 | orchestrator | + volume_type = "ssd" 2026-03-09 00:02:15.344201 | orchestrator | } 2026-03-09 00:02:15.344290 | orchestrator | 2026-03-09 00:02:15.344307 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created 2026-03-09 00:02:15.344314 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-09 00:02:15.344325 | orchestrator | + attachment = (known after apply) 2026-03-09 00:02:15.344331 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:15.344337 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.344343 | orchestrator | + metadata = (known after apply) 2026-03-09 00:02:15.344348 | orchestrator | + name = "testbed-volume-7-node-4" 2026-03-09 00:02:15.344354 | orchestrator | + region = (known after apply) 
2026-03-09 00:02:15.344360 | orchestrator | + size = 20 2026-03-09 00:02:15.344366 | orchestrator | + volume_retype_policy = "never" 2026-03-09 00:02:15.344372 | orchestrator | + volume_type = "ssd" 2026-03-09 00:02:15.344377 | orchestrator | } 2026-03-09 00:02:15.344469 | orchestrator | 2026-03-09 00:02:15.344487 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-09 00:02:15.344494 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-09 00:02:15.344500 | orchestrator | + attachment = (known after apply) 2026-03-09 00:02:15.344506 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:15.344511 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.344517 | orchestrator | + metadata = (known after apply) 2026-03-09 00:02:15.344523 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-09 00:02:15.344528 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.344534 | orchestrator | + size = 20 2026-03-09 00:02:15.344540 | orchestrator | + volume_retype_policy = "never" 2026-03-09 00:02:15.344545 | orchestrator | + volume_type = "ssd" 2026-03-09 00:02:15.344551 | orchestrator | } 2026-03-09 00:02:15.344861 | orchestrator | 2026-03-09 00:02:15.344882 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-09 00:02:15.344889 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-09 00:02:15.344895 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:15.344901 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:15.344907 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:15.344912 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:15.344918 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:15.344923 | orchestrator | + config_drive = true 2026-03-09 00:02:15.344929 | orchestrator | + created = (known after apply) 
2026-03-09 00:02:15.344935 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:15.344940 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-09 00:02:15.344946 | orchestrator | + force_delete = false 2026-03-09 00:02:15.344951 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:15.344957 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.344963 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:15.344968 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:15.344974 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:15.344980 | orchestrator | + name = "testbed-manager" 2026-03-09 00:02:15.344986 | orchestrator | + power_state = "active" 2026-03-09 00:02:15.344991 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.344997 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:15.345002 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:15.345008 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:15.345014 | orchestrator | + user_data = (sensitive value) 2026-03-09 00:02:15.345019 | orchestrator | 2026-03-09 00:02:15.345025 | orchestrator | + block_device { 2026-03-09 00:02:15.345031 | orchestrator | + boot_index = 0 2026-03-09 00:02:15.345037 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:15.345046 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:15.345052 | orchestrator | + multiattach = false 2026-03-09 00:02:15.345058 | orchestrator | + source_type = "volume" 2026-03-09 00:02:15.345063 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.345075 | orchestrator | } 2026-03-09 00:02:15.345081 | orchestrator | 2026-03-09 00:02:15.345086 | orchestrator | + network { 2026-03-09 00:02:15.345092 | orchestrator | + access_network = false 2026-03-09 00:02:15.345098 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:15.345103 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-03-09 00:02:15.345109 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:15.345115 | orchestrator | + name = (known after apply) 2026-03-09 00:02:15.345120 | orchestrator | + port = (known after apply) 2026-03-09 00:02:15.345126 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.345132 | orchestrator | } 2026-03-09 00:02:15.345137 | orchestrator | } 2026-03-09 00:02:15.345418 | orchestrator | 2026-03-09 00:02:15.345439 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-09 00:02:15.345445 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:15.345451 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:15.345457 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:15.345462 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:15.345468 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:15.345474 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:15.345479 | orchestrator | + config_drive = true 2026-03-09 00:02:15.345485 | orchestrator | + created = (known after apply) 2026-03-09 00:02:15.345490 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:15.345498 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:15.345507 | orchestrator | + force_delete = false 2026-03-09 00:02:15.345517 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:15.345526 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.345535 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:15.345544 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:15.345554 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:15.345564 | orchestrator | + name = "testbed-node-0" 2026-03-09 00:02:15.345573 | orchestrator | + power_state = "active" 2026-03-09 00:02:15.345583 | orchestrator | + region 
= (known after apply) 2026-03-09 00:02:15.345590 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:15.345596 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:15.345602 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:15.345608 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:15.345613 | orchestrator | 2026-03-09 00:02:15.345619 | orchestrator | + block_device { 2026-03-09 00:02:15.345625 | orchestrator | + boot_index = 0 2026-03-09 00:02:15.345631 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:15.345636 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:15.345642 | orchestrator | + multiattach = false 2026-03-09 00:02:15.345648 | orchestrator | + source_type = "volume" 2026-03-09 00:02:15.345653 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.345659 | orchestrator | } 2026-03-09 00:02:15.345665 | orchestrator | 2026-03-09 00:02:15.345671 | orchestrator | + network { 2026-03-09 00:02:15.345677 | orchestrator | + access_network = false 2026-03-09 00:02:15.345682 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:15.345688 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:15.345694 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:15.345700 | orchestrator | + name = (known after apply) 2026-03-09 00:02:15.345705 | orchestrator | + port = (known after apply) 2026-03-09 00:02:15.345711 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.345717 | orchestrator | } 2026-03-09 00:02:15.345722 | orchestrator | } 2026-03-09 00:02:15.346068 | orchestrator | 2026-03-09 00:02:15.346110 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-09 00:02:15.346118 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:15.346124 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 
00:02:15.346137 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:15.346143 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:15.346149 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:15.346154 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:15.346160 | orchestrator | + config_drive = true 2026-03-09 00:02:15.346166 | orchestrator | + created = (known after apply) 2026-03-09 00:02:15.346171 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:15.346177 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:15.346183 | orchestrator | + force_delete = false 2026-03-09 00:02:15.346188 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:15.346194 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.346200 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:15.346206 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:15.346211 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:15.346217 | orchestrator | + name = "testbed-node-1" 2026-03-09 00:02:15.346223 | orchestrator | + power_state = "active" 2026-03-09 00:02:15.346228 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.346234 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:15.346240 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:15.346245 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:15.346252 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:15.346258 | orchestrator | 2026-03-09 00:02:15.346263 | orchestrator | + block_device { 2026-03-09 00:02:15.346269 | orchestrator | + boot_index = 0 2026-03-09 00:02:15.346275 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:15.346281 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:15.346286 | orchestrator | + multiattach = false 2026-03-09 
00:02:15.346292 | orchestrator | + source_type = "volume" 2026-03-09 00:02:15.346298 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.346303 | orchestrator | } 2026-03-09 00:02:15.346309 | orchestrator | 2026-03-09 00:02:15.346315 | orchestrator | + network { 2026-03-09 00:02:15.346320 | orchestrator | + access_network = false 2026-03-09 00:02:15.346326 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:15.346332 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:15.346337 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:15.346343 | orchestrator | + name = (known after apply) 2026-03-09 00:02:15.346349 | orchestrator | + port = (known after apply) 2026-03-09 00:02:15.346355 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.346361 | orchestrator | } 2026-03-09 00:02:15.346366 | orchestrator | } 2026-03-09 00:02:15.346638 | orchestrator | 2026-03-09 00:02:15.346659 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-09 00:02:15.346666 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:15.346672 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:15.346678 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:15.346687 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:15.346693 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:15.346703 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:15.346711 | orchestrator | + config_drive = true 2026-03-09 00:02:15.346720 | orchestrator | + created = (known after apply) 2026-03-09 00:02:15.346762 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:15.346772 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:15.346782 | orchestrator | + force_delete = false 2026-03-09 00:02:15.346788 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 
00:02:15.346794 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.346800 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:15.346812 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:15.346818 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:15.346823 | orchestrator | + name = "testbed-node-2" 2026-03-09 00:02:15.346829 | orchestrator | + power_state = "active" 2026-03-09 00:02:15.346835 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.346840 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:15.346846 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:15.346852 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:15.346857 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:15.346863 | orchestrator | 2026-03-09 00:02:15.346869 | orchestrator | + block_device { 2026-03-09 00:02:15.346875 | orchestrator | + boot_index = 0 2026-03-09 00:02:15.346880 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:15.346886 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:15.346892 | orchestrator | + multiattach = false 2026-03-09 00:02:15.346897 | orchestrator | + source_type = "volume" 2026-03-09 00:02:15.346905 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.346915 | orchestrator | } 2026-03-09 00:02:15.346924 | orchestrator | 2026-03-09 00:02:15.346933 | orchestrator | + network { 2026-03-09 00:02:15.346942 | orchestrator | + access_network = false 2026-03-09 00:02:15.346952 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:15.346962 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:15.346972 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:15.346978 | orchestrator | + name = (known after apply) 2026-03-09 00:02:15.346984 | orchestrator | + port = (known after apply) 2026-03-09 00:02:15.346990 | orchestrator | + uuid 
= (known after apply) 2026-03-09 00:02:15.346996 | orchestrator | } 2026-03-09 00:02:15.347002 | orchestrator | } 2026-03-09 00:02:15.347282 | orchestrator | 2026-03-09 00:02:15.347301 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-09 00:02:15.347308 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:15.347314 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:15.347320 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:15.347325 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:15.347332 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:15.347337 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:15.347343 | orchestrator | + config_drive = true 2026-03-09 00:02:15.347349 | orchestrator | + created = (known after apply) 2026-03-09 00:02:15.347355 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:15.347360 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:15.347366 | orchestrator | + force_delete = false 2026-03-09 00:02:15.347372 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:15.347377 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.347383 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:15.347389 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:15.347395 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:15.347401 | orchestrator | + name = "testbed-node-3" 2026-03-09 00:02:15.347406 | orchestrator | + power_state = "active" 2026-03-09 00:02:15.347412 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.347418 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:15.347423 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:15.347429 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:15.347435 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:15.347441 | orchestrator | 2026-03-09 00:02:15.347446 | orchestrator | + block_device { 2026-03-09 00:02:15.347457 | orchestrator | + boot_index = 0 2026-03-09 00:02:15.347463 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:15.347469 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:15.347480 | orchestrator | + multiattach = false 2026-03-09 00:02:15.347486 | orchestrator | + source_type = "volume" 2026-03-09 00:02:15.347492 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.347498 | orchestrator | } 2026-03-09 00:02:15.347504 | orchestrator | 2026-03-09 00:02:15.347509 | orchestrator | + network { 2026-03-09 00:02:15.347515 | orchestrator | + access_network = false 2026-03-09 00:02:15.347521 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:15.347526 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:15.347532 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:15.347538 | orchestrator | + name = (known after apply) 2026-03-09 00:02:15.347543 | orchestrator | + port = (known after apply) 2026-03-09 00:02:15.347549 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.347555 | orchestrator | } 2026-03-09 00:02:15.347561 | orchestrator | } 2026-03-09 00:02:15.347844 | orchestrator | 2026-03-09 00:02:15.347866 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-09 00:02:15.347873 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:15.347880 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:15.347885 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:15.347891 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:15.347897 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:15.347903 | orchestrator | + availability_zone = "nova" 2026-03-09 
00:02:15.347908 | orchestrator | + config_drive = true 2026-03-09 00:02:15.347914 | orchestrator | + created = (known after apply) 2026-03-09 00:02:15.347919 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:15.347925 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:15.347931 | orchestrator | + force_delete = false 2026-03-09 00:02:15.347937 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:15.347943 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.347949 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:15.347954 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:15.347960 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:15.347965 | orchestrator | + name = "testbed-node-4" 2026-03-09 00:02:15.347971 | orchestrator | + power_state = "active" 2026-03-09 00:02:15.347977 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.347982 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:15.347988 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:15.347994 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:15.347999 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:15.348005 | orchestrator | 2026-03-09 00:02:15.348011 | orchestrator | + block_device { 2026-03-09 00:02:15.348017 | orchestrator | + boot_index = 0 2026-03-09 00:02:15.348023 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:15.348028 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:15.348034 | orchestrator | + multiattach = false 2026-03-09 00:02:15.348040 | orchestrator | + source_type = "volume" 2026-03-09 00:02:15.348046 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.348051 | orchestrator | } 2026-03-09 00:02:15.348057 | orchestrator | 2026-03-09 00:02:15.348063 | orchestrator | + network { 2026-03-09 00:02:15.348069 | orchestrator | + 
access_network = false 2026-03-09 00:02:15.348074 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:15.348080 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:15.348086 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:15.348091 | orchestrator | + name = (known after apply) 2026-03-09 00:02:15.348097 | orchestrator | + port = (known after apply) 2026-03-09 00:02:15.348103 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.348109 | orchestrator | } 2026-03-09 00:02:15.348115 | orchestrator | } 2026-03-09 00:02:15.348383 | orchestrator | 2026-03-09 00:02:15.348403 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-09 00:02:15.348410 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-09 00:02:15.348416 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-09 00:02:15.348422 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-09 00:02:15.348427 | orchestrator | + all_metadata = (known after apply) 2026-03-09 00:02:15.348433 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:15.348439 | orchestrator | + availability_zone = "nova" 2026-03-09 00:02:15.348445 | orchestrator | + config_drive = true 2026-03-09 00:02:15.348451 | orchestrator | + created = (known after apply) 2026-03-09 00:02:15.348457 | orchestrator | + flavor_id = (known after apply) 2026-03-09 00:02:15.348463 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-09 00:02:15.348468 | orchestrator | + force_delete = false 2026-03-09 00:02:15.348478 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-09 00:02:15.348484 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.348490 | orchestrator | + image_id = (known after apply) 2026-03-09 00:02:15.348495 | orchestrator | + image_name = (known after apply) 2026-03-09 00:02:15.348501 | orchestrator | + key_pair = "testbed" 2026-03-09 00:02:15.348507 | orchestrator | 
+ name = "testbed-node-5" 2026-03-09 00:02:15.348512 | orchestrator | + power_state = "active" 2026-03-09 00:02:15.348518 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.348523 | orchestrator | + security_groups = (known after apply) 2026-03-09 00:02:15.348529 | orchestrator | + stop_before_destroy = false 2026-03-09 00:02:15.348535 | orchestrator | + updated = (known after apply) 2026-03-09 00:02:15.348541 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-09 00:02:15.348546 | orchestrator | 2026-03-09 00:02:15.348552 | orchestrator | + block_device { 2026-03-09 00:02:15.348557 | orchestrator | + boot_index = 0 2026-03-09 00:02:15.348563 | orchestrator | + delete_on_termination = false 2026-03-09 00:02:15.348569 | orchestrator | + destination_type = "volume" 2026-03-09 00:02:15.348574 | orchestrator | + multiattach = false 2026-03-09 00:02:15.348580 | orchestrator | + source_type = "volume" 2026-03-09 00:02:15.348586 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.348592 | orchestrator | } 2026-03-09 00:02:15.348598 | orchestrator | 2026-03-09 00:02:15.348603 | orchestrator | + network { 2026-03-09 00:02:15.348609 | orchestrator | + access_network = false 2026-03-09 00:02:15.348615 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-09 00:02:15.348621 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-09 00:02:15.348626 | orchestrator | + mac = (known after apply) 2026-03-09 00:02:15.348632 | orchestrator | + name = (known after apply) 2026-03-09 00:02:15.348638 | orchestrator | + port = (known after apply) 2026-03-09 00:02:15.348643 | orchestrator | + uuid = (known after apply) 2026-03-09 00:02:15.348649 | orchestrator | } 2026-03-09 00:02:15.348655 | orchestrator | } 2026-03-09 00:02:15.348724 | orchestrator | 2026-03-09 00:02:15.348757 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-09 00:02:15.348764 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-03-09 00:02:15.348770 | orchestrator | + fingerprint = (known after apply) 2026-03-09 00:02:15.348775 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.348781 | orchestrator | + name = "testbed" 2026-03-09 00:02:15.348787 | orchestrator | + private_key = (sensitive value) 2026-03-09 00:02:15.348793 | orchestrator | + public_key = (known after apply) 2026-03-09 00:02:15.348798 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.348804 | orchestrator | + user_id = (known after apply) 2026-03-09 00:02:15.348810 | orchestrator | } 2026-03-09 00:02:15.348866 | orchestrator | 2026-03-09 00:02:15.348882 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-09 00:02:15.348889 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-09 00:02:15.348901 | orchestrator | + device = (known after apply) 2026-03-09 00:02:15.348907 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.348913 | orchestrator | + instance_id = (known after apply) 2026-03-09 00:02:15.348918 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.348924 | orchestrator | + volume_id = (known after apply) 2026-03-09 00:02:15.348930 | orchestrator | } 2026-03-09 00:02:15.348984 | orchestrator | 2026-03-09 00:02:15.349000 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-09 00:02:15.349007 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-09 00:02:15.349013 | orchestrator | + device = (known after apply) 2026-03-09 00:02:15.349019 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.349025 | orchestrator | + instance_id = (known after apply) 2026-03-09 00:02:15.349031 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.349036 | orchestrator | + volume_id = (known after apply) 2026-03-09 
00:02:15.349042 | orchestrator | } 2026-03-09 00:02:15.349095 | orchestrator | 2026-03-09 00:02:15.349112 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-09 00:02:15.349118 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-09 00:02:15.349124 | orchestrator | + device = (known after apply) 2026-03-09 00:02:15.349130 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.349136 | orchestrator | + instance_id = (known after apply) 2026-03-09 00:02:15.349141 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.349147 | orchestrator | + volume_id = (known after apply) 2026-03-09 00:02:15.349152 | orchestrator | } 2026-03-09 00:02:15.349205 | orchestrator | 2026-03-09 00:02:15.349222 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-03-09 00:02:15.349229 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-09 00:02:15.349235 | orchestrator | + device = (known after apply) 2026-03-09 00:02:15.349240 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.349246 | orchestrator | + instance_id = (known after apply) 2026-03-09 00:02:15.349252 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.349258 | orchestrator | + volume_id = (known after apply) 2026-03-09 00:02:15.349263 | orchestrator | } 2026-03-09 00:02:15.349312 | orchestrator | 2026-03-09 00:02:15.349328 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-03-09 00:02:15.349334 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-09 00:02:15.349340 | orchestrator | + device = (known after apply) 2026-03-09 00:02:15.349346 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.349352 | orchestrator | + instance_id = (known after apply) 2026-03-09 00:02:15.349362 | 
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-09 00:02:15.354756 | orchestrator | + security_group_id = (known after apply) 2026-03-09 00:02:15.354761 | orchestrator | + tenant_id = (known after apply) 2026-03-09 00:02:15.354770 | orchestrator | } 2026-03-09 00:02:15.354850 | orchestrator | 2026-03-09 00:02:15.354865 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2026-03-09 00:02:15.354871 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2026-03-09 00:02:15.354876 | orchestrator | + description = "vrrp" 2026-03-09 00:02:15.354881 | orchestrator | + direction = "ingress" 2026-03-09 00:02:15.354885 | orchestrator | + ethertype = "IPv4" 2026-03-09 00:02:15.354890 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.354895 | orchestrator | + protocol = "112" 2026-03-09 00:02:15.354900 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.354904 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-09 00:02:15.354909 | orchestrator | + remote_group_id = (known after apply) 2026-03-09 00:02:15.354914 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-09 00:02:15.354919 | orchestrator | + security_group_id = (known after apply) 2026-03-09 00:02:15.354924 | orchestrator | + tenant_id = (known after apply) 2026-03-09 00:02:15.354928 | orchestrator | } 2026-03-09 00:02:15.354986 | orchestrator | 2026-03-09 00:02:15.355000 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created 2026-03-09 00:02:15.355006 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" { 2026-03-09 00:02:15.355011 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:15.355016 | orchestrator | + description = "management security group" 2026-03-09 00:02:15.355021 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.355025 | orchestrator | + name = 
"testbed-management" 2026-03-09 00:02:15.355030 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.355035 | orchestrator | + stateful = (known after apply) 2026-03-09 00:02:15.355039 | orchestrator | + tenant_id = (known after apply) 2026-03-09 00:02:15.355044 | orchestrator | } 2026-03-09 00:02:15.355099 | orchestrator | 2026-03-09 00:02:15.355114 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-03-09 00:02:15.355119 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-03-09 00:02:15.355124 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:15.355129 | orchestrator | + description = "node security group" 2026-03-09 00:02:15.355134 | orchestrator | + id = (known after apply) 2026-03-09 00:02:15.355139 | orchestrator | + name = "testbed-node" 2026-03-09 00:02:15.355143 | orchestrator | + region = (known after apply) 2026-03-09 00:02:15.355148 | orchestrator | + stateful = (known after apply) 2026-03-09 00:02:15.355153 | orchestrator | + tenant_id = (known after apply) 2026-03-09 00:02:15.355157 | orchestrator | } 2026-03-09 00:02:15.355283 | orchestrator | 2026-03-09 00:02:15.355299 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-03-09 00:02:15.355304 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-03-09 00:02:15.355309 | orchestrator | + all_tags = (known after apply) 2026-03-09 00:02:15.355314 | orchestrator | + cidr = "192.168.16.0/20" 2026-03-09 00:02:15.355319 | orchestrator | + dns_nameservers = [ 2026-03-09 00:02:15.355324 | orchestrator | + "8.8.8.8", 2026-03-09 00:02:15.355328 | orchestrator | + "9.9.9.9", 2026-03-09 00:02:15.355333 | orchestrator | ] 2026-03-09 00:02:15.355338 | orchestrator | + enable_dhcp = true 2026-03-09 00:02:15.355344 | orchestrator | + gateway_ip = (known after apply) 2026-03-09 00:02:15.355348 | orchestrator | + id = (known after apply) 
2026-03-09 00:02:15.355353 | orchestrator | + ip_version = 4
2026-03-09 00:02:15.355359 | orchestrator | + ipv6_address_mode = (known after apply)
2026-03-09 00:02:15.355363 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-03-09 00:02:15.355368 | orchestrator | + name = "subnet-testbed-management"
2026-03-09 00:02:15.355373 | orchestrator | + network_id = (known after apply)
2026-03-09 00:02:15.355377 | orchestrator | + no_gateway = false
2026-03-09 00:02:15.355382 | orchestrator | + region = (known after apply)
2026-03-09 00:02:15.355387 | orchestrator | + service_types = (known after apply)
2026-03-09 00:02:15.355396 | orchestrator | + tenant_id = (known after apply)
2026-03-09 00:02:15.355401 | orchestrator |
2026-03-09 00:02:15.355406 | orchestrator | + allocation_pool {
2026-03-09 00:02:15.355410 | orchestrator | + end = "192.168.31.250"
2026-03-09 00:02:15.355415 | orchestrator | + start = "192.168.31.200"
2026-03-09 00:02:15.355420 | orchestrator | }
2026-03-09 00:02:15.355425 | orchestrator | }
2026-03-09 00:02:15.355463 | orchestrator |
2026-03-09 00:02:15.355480 | orchestrator | # terraform_data.image will be created
2026-03-09 00:02:15.355486 | orchestrator | + resource "terraform_data" "image" {
2026-03-09 00:02:15.355490 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.355495 | orchestrator | + input = "Ubuntu 24.04"
2026-03-09 00:02:15.355500 | orchestrator | + output = (known after apply)
2026-03-09 00:02:15.355505 | orchestrator | }
2026-03-09 00:02:15.355542 | orchestrator |
2026-03-09 00:02:15.355557 | orchestrator | # terraform_data.image_node will be created
2026-03-09 00:02:15.355563 | orchestrator | + resource "terraform_data" "image_node" {
2026-03-09 00:02:15.355567 | orchestrator | + id = (known after apply)
2026-03-09 00:02:15.355572 | orchestrator | + input = "Ubuntu 24.04"
2026-03-09 00:02:15.355577 | orchestrator | + output = (known after apply)
2026-03-09 00:02:15.355582 | orchestrator | }
2026-03-09 00:02:15.355600 | orchestrator |
2026-03-09 00:02:15.355606 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-09 00:02:15.355621 | orchestrator |
2026-03-09 00:02:15.355626 | orchestrator | Changes to Outputs:
2026-03-09 00:02:15.355640 | orchestrator | + manager_address = (sensitive value)
2026-03-09 00:02:15.355645 | orchestrator | + private_key = (sensitive value)
2026-03-09 00:02:15.538198 | orchestrator | terraform_data.image_node: Creating...
2026-03-09 00:02:15.542512 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=a96c2aa0-9584-adab-16ba-7de4eecbec1b]
2026-03-09 00:02:15.543998 | orchestrator | terraform_data.image: Creating...
2026-03-09 00:02:15.544595 | orchestrator | terraform_data.image: Creation complete after 0s [id=5f358a8d-00fe-7afa-1ad0-90e4ff0f0525]
2026-03-09 00:02:15.551619 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-09 00:02:15.555004 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-09 00:02:15.556804 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-09 00:02:15.557658 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-09 00:02:15.560665 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-09 00:02:15.564032 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-09 00:02:15.570504 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-09 00:02:15.572140 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-09 00:02:15.583904 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-09 00:02:15.591053 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-09 00:02:16.033682 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-09 00:02:16.036713 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-09 00:02:16.039393 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-09 00:02:16.041846 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-09 00:02:16.044095 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-09 00:02:16.049188 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-09 00:02:16.745080 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=5a497d3e-ef74-4bda-9e04-74972060f009]
2026-03-09 00:02:16.751753 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-09 00:02:19.186811 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=11658218-3952-45bc-99ae-d48f4d257268]
2026-03-09 00:02:19.189812 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=3a13d83a-3534-4183-8691-9f150495a6dc]
2026-03-09 00:02:19.194467 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-09 00:02:19.202710 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-09 00:02:19.206660 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=0cde3168a9cf3ff9c2991a72b0b742d0e79753fc]
2026-03-09 00:02:19.214090 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-09 00:02:19.234966 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=069ee836-7f84-4f9f-9b43-0fd45db025c2]
2026-03-09 00:02:19.238851 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=d43c938e-9c3c-4e95-bc09-26edff92b810]
2026-03-09 00:02:19.242136 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=49ad7546-ef2d-4696-ae5b-c2e2e05846ff]
2026-03-09 00:02:19.250096 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-09 00:02:19.252514 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-09 00:02:19.268400 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-09 00:02:19.268513 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=763f54df-2df6-4a17-b758-6e7498448fae]
2026-03-09 00:02:19.275784 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-09 00:02:19.306697 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=26907958-5014-4e4e-aaae-f132ebc9345b]
2026-03-09 00:02:19.319645 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-09 00:02:19.323518 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=10864b219cd7bbeadcdb9e1c92ab153c0e840c0a]
2026-03-09 00:02:19.328075 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=34bdd215-cdf5-4909-8dd4-972bf1b79030]
2026-03-09 00:02:19.332664 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=709b939c-9ac4-47b1-b5c3-cb1d8710b2fd]
2026-03-09 00:02:19.332786 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-09 00:02:20.099030 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=4358574d-1ee3-4934-b4bc-139515f47f54]
2026-03-09 00:02:20.299018 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=28110447-8757-4652-ac25-4506991d29b9]
2026-03-09 00:02:20.303483 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-09 00:02:22.571787 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=3a731137-5cc5-4157-94da-3d583abc100b]
2026-03-09 00:02:23.731653 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=6df45922-8f75-4a42-8a21-8a577e31863a]
2026-03-09 00:02:23.731813 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=c3247b01-1045-414b-8d68-d46805c465ad]
2026-03-09 00:02:23.731830 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=d1ae549f-778a-485f-b059-8e9bc989d7ac]
2026-03-09 00:02:23.731840 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=bfa8e3a8-0734-434c-abec-79aad619d4fa]
2026-03-09 00:02:23.731850 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=d4e9789d-4787-4538-a188-9409f1cddce2]
2026-03-09 00:02:23.731892 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=e6c2919b-7c32-4331-8a23-c5de33979771]
2026-03-09 00:02:23.731904 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-09 00:02:23.731914 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-09 00:02:23.731924 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-09 00:02:23.731934 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=e08bcbb6-6ec2-4597-a960-ebeea0538348]
2026-03-09 00:02:23.731978 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-09 00:02:23.731989 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-09 00:02:23.731999 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-09 00:02:23.732027 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-09 00:02:23.732070 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-09 00:02:23.732080 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-09 00:02:23.732090 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-09 00:02:23.732100 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-09 00:02:23.801940 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=e5cfd4be-5161-4f05-80e7-c01b64aeb936]
2026-03-09 00:02:23.808090 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-09 00:02:23.977001 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=4113a84e-cd82-43b2-9823-f8e1dda59c5a]
2026-03-09 00:02:23.984662 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-09 00:02:24.209397 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=d237163e-396a-4658-a807-5dbd640e52f9]
2026-03-09 00:02:24.218739 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-09 00:02:24.358029 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=f4dbe76d-4d1a-423b-9a62-6cd0b05f899b]
2026-03-09 00:02:24.363318 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-09 00:02:24.449047 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=f2479fa4-3aa4-466b-819d-30afc864480d]
2026-03-09 00:02:24.453431 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-09 00:02:24.476176 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=068d508a-c943-4059-964d-e9635679a625]
2026-03-09 00:02:24.482928 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-09 00:02:24.540089 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=f51c3c1f-6774-498b-b079-d58c06e4aea1]
2026-03-09 00:02:24.548440 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-09 00:02:24.596020 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=d75c86e0-8d12-4e54-93ea-fea5cf1446e3]
2026-03-09 00:02:24.600239 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-09 00:02:24.764036 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=590aa2b4-84b9-48ed-823e-e71c74c656f3]
2026-03-09 00:02:24.819279 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=b8ad53d5-e40e-41e6-b8fd-5976ed841a2b]
2026-03-09 00:02:24.953556 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=fa070805-ab1d-4594-816a-aef19aa913cd]
2026-03-09 00:02:24.957877 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c1702b2c-7c1b-47cf-8a9f-bfe92ab1d662]
2026-03-09 00:02:24.975496 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=28beb6c0-ec6f-4451-9f3e-3b716a984f4a]
2026-03-09 00:02:25.148058 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=089df60f-0b47-421c-9f31-a32fc8e9da30]
2026-03-09 00:02:25.237986 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=b9643155-b8cd-436e-b86b-02254fa4a413]
2026-03-09 00:02:25.286082 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=cbeb8641-38c2-4bd6-9133-603d44c43636]
2026-03-09 00:02:25.586432 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=2df7fcb5-f215-4288-82d5-3fe4e1ab8619]
2026-03-09 00:02:26.665567 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=b5086a21-51e4-4f06-b37c-11f2c3aeca36]
2026-03-09 00:02:26.686581 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-09 00:02:26.696425 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-09 00:02:26.697169 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-09 00:02:26.702115 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-09 00:02:26.702176 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-09 00:02:26.704015 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-09 00:02:26.711955 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-09 00:02:27.998811 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=66487aae-12c7-4955-baf4-de329f88163f]
2026-03-09 00:02:28.012707 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-09 00:02:28.017860 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-09 00:02:28.018978 | orchestrator | local_file.inventory: Creating...
2026-03-09 00:02:28.023549 | orchestrator | local_file.inventory: Creation complete after 0s [id=e793f981501a2d3e58126d47a71359f4dd9ea83b]
2026-03-09 00:02:28.024534 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ac94be04f92dadd881de62b90b0fb3855afa9a71]
2026-03-09 00:02:28.774495 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=66487aae-12c7-4955-baf4-de329f88163f]
2026-03-09 00:02:36.697511 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-09 00:02:36.702612 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-09 00:02:36.704878 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-09 00:02:36.704948 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-09 00:02:36.704965 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-09 00:02:36.715341 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-09 00:02:46.706299 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-09 00:02:46.706434 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-09 00:02:46.706453 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-09 00:02:46.706466 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-09 00:02:46.706478 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-09 00:02:46.715630 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-09 00:02:47.256326 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=e7097194-8100-452e-a40f-b5b4f56a2a34]
2026-03-09 00:02:47.324908 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=29651b7b-ad82-4321-9a03-e43f452d3ce1]
2026-03-09 00:02:47.349375 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=05e25f1a-5b3b-4e8f-a410-d93da47cb1a2]
2026-03-09 00:02:56.716007 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-09 00:02:56.716134 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-09 00:02:56.716150 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-09 00:02:57.357156 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=fc75b90c-592d-4243-8f02-17e5a8f4d181]
2026-03-09 00:02:57.543956 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=f8f2efd5-2235-4734-877f-bc8dfd6bcde3]
2026-03-09 00:02:57.813973 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=3e2b491e-8aab-48ef-be4a-792eacc701d9]
2026-03-09 00:02:57.832531 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-09 00:02:57.833596 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-09 00:02:57.834423 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-09 00:02:57.838924 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-09 00:02:57.844414 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=2569162125175001503]
2026-03-09 00:02:57.849235 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-09 00:02:57.855490 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-09 00:02:57.856969 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-09 00:02:57.857363 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-09 00:02:57.861900 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-09 00:02:57.870015 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-09 00:02:57.888397 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-09 00:03:01.253410 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=f8f2efd5-2235-4734-877f-bc8dfd6bcde3/069ee836-7f84-4f9f-9b43-0fd45db025c2]
2026-03-09 00:03:01.283860 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=e7097194-8100-452e-a40f-b5b4f56a2a34/49ad7546-ef2d-4696-ae5b-c2e2e05846ff]
2026-03-09 00:03:01.294832 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=f8f2efd5-2235-4734-877f-bc8dfd6bcde3/709b939c-9ac4-47b1-b5c3-cb1d8710b2fd]
2026-03-09 00:03:01.309466 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=29651b7b-ad82-4321-9a03-e43f452d3ce1/3a13d83a-3534-4183-8691-9f150495a6dc]
2026-03-09 00:03:01.332166 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=e7097194-8100-452e-a40f-b5b4f56a2a34/763f54df-2df6-4a17-b758-6e7498448fae]
2026-03-09 00:03:01.350905 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=29651b7b-ad82-4321-9a03-e43f452d3ce1/d43c938e-9c3c-4e95-bc09-26edff92b810]
2026-03-09 00:03:07.424445 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=f8f2efd5-2235-4734-877f-bc8dfd6bcde3/34bdd215-cdf5-4909-8dd4-972bf1b79030]
2026-03-09 00:03:07.440872 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=e7097194-8100-452e-a40f-b5b4f56a2a34/26907958-5014-4e4e-aaae-f132ebc9345b]
2026-03-09 00:03:07.461580 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=29651b7b-ad82-4321-9a03-e43f452d3ce1/11658218-3952-45bc-99ae-d48f4d257268]
2026-03-09 00:03:07.887190 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-09 00:03:17.887670 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-09 00:03:27.888651 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [30s elapsed]
2026-03-09 00:03:37.889415 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [40s elapsed]
2026-03-09 00:03:38.912598 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 41s [id=5b8725f2-a279-4996-9296-a72725489956]
2026-03-09 00:03:38.937229 | orchestrator |
2026-03-09 00:03:38.937326 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-09 00:03:38.937342 | orchestrator |
2026-03-09 00:03:38.937354 | orchestrator | Outputs:
2026-03-09 00:03:38.937366 | orchestrator |
2026-03-09 00:03:38.937377 | orchestrator | manager_address =
2026-03-09 00:03:38.937389 | orchestrator | private_key =
2026-03-09 00:03:39.310871 | orchestrator | ok: Runtime: 0:01:29.229631
2026-03-09 00:03:39.348112 |
2026-03-09 00:03:39.348316 | TASK [Create infrastructure (stable)]
2026-03-09 00:03:39.885278 | orchestrator | skipping: Conditional result was False
2026-03-09 00:03:39.902548 |
2026-03-09 00:03:39.902709 | TASK [Fetch manager address]
2026-03-09 00:03:40.462417 | orchestrator | ok
2026-03-09 00:03:40.472426 |
2026-03-09 00:03:40.472767 | TASK [Set manager_host address]
2026-03-09 00:03:40.549536 | orchestrator | ok
2026-03-09 00:03:40.557225 |
2026-03-09 00:03:40.557343 | LOOP [Update ansible collections]
2026-03-09 00:03:50.172972 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-09 00:03:50.173308 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-09 00:03:50.173354 | orchestrator | Starting galaxy collection install process
2026-03-09 00:03:50.173380 | orchestrator | Process install dependency map
2026-03-09 00:03:50.173403 | orchestrator | Starting collection install process
2026-03-09 00:03:50.173423 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2026-03-09 00:03:50.173447 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2026-03-09 00:03:50.173477 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-09 00:03:50.173530 | orchestrator | ok: Item: commons Runtime: 0:00:09.304388
2026-03-09 00:03:53.106431 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-09 00:03:53.106816 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-09 00:03:53.106975 | orchestrator | Starting galaxy collection install process
2026-03-09 00:03:53.107039 | orchestrator | Process install dependency map
2026-03-09 00:03:53.107093 | orchestrator | Starting collection install process
2026-03-09 00:03:53.107147 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2026-03-09 00:03:53.107200 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2026-03-09 00:03:53.107252 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-09 00:03:53.107337 | orchestrator | ok: Item: services Runtime: 0:00:02.660521
2026-03-09 00:03:53.135313 |
2026-03-09 00:03:53.135471 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-09 00:04:03.732312 | orchestrator | ok
2026-03-09 00:04:03.743577 |
2026-03-09 00:04:03.743714 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-09 00:05:03.789982 | orchestrator | ok
2026-03-09 00:05:03.800179 |
2026-03-09 00:05:03.800312 | TASK [Fetch manager ssh hostkey]
2026-03-09 00:05:05.385261 | orchestrator | Output suppressed because no_log was given
2026-03-09 00:05:05.403585 |
2026-03-09 00:05:05.403831 | TASK [Get ssh keypair from terraform environment]
2026-03-09 00:05:05.947840 | orchestrator | ok: Runtime: 0:00:00.008164
2026-03-09 00:05:05.956902 |
2026-03-09 00:05:05.957131 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-09 00:05:06.006074 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-09 00:05:06.015511 |
2026-03-09 00:05:06.015652 | TASK [Run manager part 0]
2026-03-09 00:05:07.104855 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-09 00:05:07.197226 | orchestrator |
2026-03-09 00:05:07.197277 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-09 00:05:07.197288 | orchestrator |
2026-03-09 00:05:07.197305 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-09 00:05:09.206049 | orchestrator | ok: [testbed-manager]
2026-03-09 00:05:09.206109 | orchestrator |
2026-03-09 00:05:09.206142 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-09 00:05:09.206156 | orchestrator |
2026-03-09 00:05:09.206168 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 00:05:11.037603 | orchestrator | ok: [testbed-manager]
2026-03-09 00:05:11.037652 | orchestrator |
2026-03-09 00:05:11.037660 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-09 00:05:11.788651 | orchestrator | ok: [testbed-manager]
2026-03-09 00:05:11.788704 | orchestrator |
2026-03-09 00:05:11.788712 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-09 00:05:11.852748 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:11.852804 | orchestrator |
2026-03-09 00:05:11.852816 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-09 00:05:11.881712 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:11.881763 | orchestrator |
2026-03-09 00:05:11.881772 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-09 00:05:11.910625 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:11.910669 | orchestrator |
2026-03-09 00:05:11.910675 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-09 00:05:11.937748 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:11.937792 | orchestrator |
2026-03-09 00:05:11.937798 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-09 00:05:11.964518 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:11.964557 | orchestrator |
2026-03-09 00:05:11.964564 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-09 00:05:11.994669 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:11.994712 | orchestrator |
2026-03-09 00:05:11.994719 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-09 00:05:12.035822 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:05:12.035875 | orchestrator |
2026-03-09 00:05:12.035887 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-09 00:05:12.782844 | orchestrator | changed: [testbed-manager]
2026-03-09 00:05:12.782893 | orchestrator |
2026-03-09 00:05:12.782901 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-09 00:08:00.603862 | orchestrator | changed: [testbed-manager] 2026-03-09 00:08:00.603918 | orchestrator | 2026-03-09 00:08:00.603931 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-09 00:09:32.983224 | orchestrator | changed: [testbed-manager] 2026-03-09 00:09:32.983359 | orchestrator | 2026-03-09 00:09:32.983377 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-09 00:09:52.210927 | orchestrator | changed: [testbed-manager] 2026-03-09 00:09:52.210968 | orchestrator | 2026-03-09 00:09:52.210978 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-09 00:10:00.551824 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:00.551923 | orchestrator | 2026-03-09 00:10:00.551940 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-09 00:10:00.616594 | orchestrator | ok: [testbed-manager] 2026-03-09 00:10:00.616684 | orchestrator | 2026-03-09 00:10:00.616701 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-09 00:10:01.388986 | orchestrator | ok: [testbed-manager] 2026-03-09 00:10:01.389085 | orchestrator | 2026-03-09 00:10:01.389106 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-09 00:10:02.113461 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:02.113553 | orchestrator | 2026-03-09 00:10:02.113573 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-09 00:10:08.093833 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:08.093879 | orchestrator | 2026-03-09 00:10:08.093905 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-09 00:10:13.860376 | orchestrator | changed: [testbed-manager] 2026-03-09 
00:10:13.860461 | orchestrator | 2026-03-09 00:10:13.860482 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-09 00:10:16.561014 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:16.561104 | orchestrator | 2026-03-09 00:10:16.561122 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-09 00:10:18.258066 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:18.258162 | orchestrator | 2026-03-09 00:10:18.258183 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-09 00:10:19.298589 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-09 00:10:19.298638 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-09 00:10:19.298646 | orchestrator | 2026-03-09 00:10:19.298653 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-09 00:10:19.344010 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-09 00:10:19.344100 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-09 00:10:19.344115 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-09 00:10:19.344129 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-09 00:10:25.772720 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-09 00:10:25.772802 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-09 00:10:25.772816 | orchestrator | 2026-03-09 00:10:25.772826 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-09 00:10:26.333076 | orchestrator | changed: [testbed-manager] 2026-03-09 00:10:26.333168 | orchestrator | 2026-03-09 00:10:26.333186 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-09 00:11:45.761949 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-09 00:11:45.762104 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-09 00:11:45.762127 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-09 00:11:45.762140 | orchestrator | 2026-03-09 00:11:45.762153 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-09 00:11:48.024193 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-09 00:11:48.024281 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-09 00:11:48.024296 | orchestrator | 2026-03-09 00:11:48.024308 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-09 00:11:48.024321 | orchestrator | 2026-03-09 00:11:48.024332 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:11:49.395984 | orchestrator | ok: [testbed-manager] 2026-03-09 00:11:49.396071 | orchestrator | 2026-03-09 00:11:49.396090 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-09 00:11:49.445707 | orchestrator | ok: [testbed-manager] 2026-03-09 00:11:49.445784 | 
orchestrator | 2026-03-09 00:11:49.445801 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-09 00:11:49.532397 | orchestrator | ok: [testbed-manager] 2026-03-09 00:11:49.532471 | orchestrator | 2026-03-09 00:11:49.532485 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-09 00:11:50.337665 | orchestrator | changed: [testbed-manager] 2026-03-09 00:11:50.337763 | orchestrator | 2026-03-09 00:11:50.337780 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-09 00:11:51.012914 | orchestrator | changed: [testbed-manager] 2026-03-09 00:11:51.013007 | orchestrator | 2026-03-09 00:11:51.013024 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-09 00:11:52.410171 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-09 00:11:52.410208 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-09 00:11:52.410214 | orchestrator | 2026-03-09 00:11:52.410227 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-09 00:11:53.782001 | orchestrator | changed: [testbed-manager] 2026-03-09 00:11:53.782099 | orchestrator | 2026-03-09 00:11:53.782108 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-09 00:11:55.532222 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-09 00:11:55.532313 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-09 00:11:55.532329 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-09 00:11:55.532343 | orchestrator | 2026-03-09 00:11:55.532357 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-09 00:11:55.593031 | orchestrator | skipping: 
[testbed-manager] 2026-03-09 00:11:55.593082 | orchestrator | 2026-03-09 00:11:55.593090 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-09 00:11:55.673931 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:11:55.673986 | orchestrator | 2026-03-09 00:11:55.674000 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-09 00:11:56.241521 | orchestrator | changed: [testbed-manager] 2026-03-09 00:11:56.242291 | orchestrator | 2026-03-09 00:11:56.242335 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-09 00:11:56.310585 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:11:56.310671 | orchestrator | 2026-03-09 00:11:56.310686 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-09 00:11:57.169943 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-09 00:11:57.170073 | orchestrator | changed: [testbed-manager] 2026-03-09 00:11:57.170092 | orchestrator | 2026-03-09 00:11:57.170105 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-09 00:11:57.212724 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:11:57.212824 | orchestrator | 2026-03-09 00:11:57.212850 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-09 00:11:57.255420 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:11:57.255505 | orchestrator | 2026-03-09 00:11:57.255521 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-09 00:11:57.297092 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:11:57.297173 | orchestrator | 2026-03-09 00:11:57.297188 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-09 00:11:57.367093 | 
orchestrator | skipping: [testbed-manager] 2026-03-09 00:11:57.367145 | orchestrator | 2026-03-09 00:11:57.367153 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-09 00:11:58.129931 | orchestrator | ok: [testbed-manager] 2026-03-09 00:11:58.130076 | orchestrator | 2026-03-09 00:11:58.130097 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-09 00:11:58.130110 | orchestrator | 2026-03-09 00:11:58.130122 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:11:59.453591 | orchestrator | ok: [testbed-manager] 2026-03-09 00:11:59.453627 | orchestrator | 2026-03-09 00:11:59.453634 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-09 00:12:00.409051 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:00.409088 | orchestrator | 2026-03-09 00:12:00.409093 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:12:00.409100 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-09 00:12:00.409104 | orchestrator | 2026-03-09 00:12:00.810483 | orchestrator | ok: Runtime: 0:06:54.198132 2026-03-09 00:12:00.832003 | 2026-03-09 00:12:00.832174 | TASK [Point out that logging in to the manager is now possible] 2026-03-09 00:12:00.880232 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-09 00:12:00.890276 | 2026-03-09 00:12:00.890459 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-09 00:12:00.924630 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-03-09 00:12:00.933133 | 2026-03-09 00:12:00.933245 | TASK [Run manager part 1 + 2] 2026-03-09 00:12:01.887065 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-09 00:12:01.947095 | orchestrator | 2026-03-09 00:12:01.947187 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-09 00:12:01.947206 | orchestrator | 2026-03-09 00:12:01.947238 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:12:04.862126 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:04.862223 | orchestrator | 2026-03-09 00:12:04.862575 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-09 00:12:04.902417 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:04.902495 | orchestrator | 2026-03-09 00:12:04.902514 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-09 00:12:04.954505 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:04.954594 | orchestrator | 2026-03-09 00:12:04.954614 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-09 00:12:05.013646 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:05.013729 | orchestrator | 2026-03-09 00:12:05.013749 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-09 00:12:05.099503 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:05.099574 | orchestrator | 2026-03-09 00:12:05.099593 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-09 00:12:05.166441 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:05.166543 | orchestrator | 2026-03-09 00:12:05.166571 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-09 00:12:05.227399 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-09 00:12:05.227467 | orchestrator | 2026-03-09 00:12:05.227478 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-09 00:12:05.945581 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:05.945670 | orchestrator | 2026-03-09 00:12:05.945687 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-09 00:12:05.995693 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:05.995750 | orchestrator | 2026-03-09 00:12:05.995757 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-09 00:12:07.382397 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:07.382620 | orchestrator | 2026-03-09 00:12:07.382639 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-09 00:12:07.965957 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:07.966056 | orchestrator | 2026-03-09 00:12:07.966071 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-09 00:12:09.088812 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:09.088895 | orchestrator | 2026-03-09 00:12:09.088910 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-09 00:12:23.579706 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:23.579752 | orchestrator | 2026-03-09 00:12:23.579758 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-09 00:12:24.308065 | orchestrator | ok: [testbed-manager] 2026-03-09 00:12:24.308152 | orchestrator | 2026-03-09 00:12:24.308172 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-09 00:12:24.369508 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:24.369583 | orchestrator | 2026-03-09 00:12:24.369596 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-09 00:12:25.329985 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:25.330072 | orchestrator | 2026-03-09 00:12:25.330085 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-09 00:12:26.265443 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:26.265514 | orchestrator | 2026-03-09 00:12:26.265525 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-09 00:12:26.811481 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:26.811515 | orchestrator | 2026-03-09 00:12:26.811520 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-09 00:12:26.844675 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-09 00:12:26.844734 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-09 00:12:26.844740 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-09 00:12:26.844745 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-09 00:12:30.043695 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:30.043776 | orchestrator | 2026-03-09 00:12:30.043787 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-09 00:12:38.852015 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-09 00:12:38.852064 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-09 00:12:38.852071 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-09 00:12:38.852076 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-09 00:12:38.852085 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-09 00:12:38.852089 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-09 00:12:38.852094 | orchestrator | 2026-03-09 00:12:38.852099 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-09 00:12:39.811802 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:39.811869 | orchestrator | 2026-03-09 00:12:39.811878 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-09 00:12:39.852626 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:39.852668 | orchestrator | 2026-03-09 00:12:39.852676 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-09 00:12:42.942619 | orchestrator | changed: [testbed-manager] 2026-03-09 00:12:42.942675 | orchestrator | 2026-03-09 00:12:42.942684 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-09 00:12:42.989802 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:12:42.989845 | orchestrator | 2026-03-09 00:12:42.989854 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-09 00:14:17.388188 | orchestrator | changed: [testbed-manager] 2026-03-09 
00:14:17.388299 | orchestrator | 2026-03-09 00:14:17.388318 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-09 00:14:18.492877 | orchestrator | ok: [testbed-manager] 2026-03-09 00:14:18.492959 | orchestrator | 2026-03-09 00:14:18.492973 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:14:18.492986 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-09 00:14:18.492996 | orchestrator | 2026-03-09 00:14:19.080276 | orchestrator | ok: Runtime: 0:02:17.351201 2026-03-09 00:14:19.096969 | 2026-03-09 00:14:19.097107 | TASK [Reboot manager] 2026-03-09 00:14:20.638372 | orchestrator | ok: Runtime: 0:00:00.933938 2026-03-09 00:14:20.656323 | 2026-03-09 00:14:20.656487 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-09 00:14:37.084328 | orchestrator | ok 2026-03-09 00:14:37.094403 | 2026-03-09 00:14:37.094531 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-09 00:15:37.145426 | orchestrator | ok 2026-03-09 00:15:37.157023 | 2026-03-09 00:15:37.157167 | TASK [Deploy manager + bootstrap nodes] 2026-03-09 00:15:41.727668 | orchestrator | 2026-03-09 00:15:41.727886 | orchestrator | # DEPLOY MANAGER 2026-03-09 00:15:41.727904 | orchestrator | 2026-03-09 00:15:41.727913 | orchestrator | + set -e 2026-03-09 00:15:41.727921 | orchestrator | + echo 2026-03-09 00:15:41.727929 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-09 00:15:41.727940 | orchestrator | + echo 2026-03-09 00:15:41.727979 | orchestrator | + cat /opt/manager-vars.sh 2026-03-09 00:15:41.731262 | orchestrator | export NUMBER_OF_NODES=6 2026-03-09 00:15:41.731288 | orchestrator | 2026-03-09 00:15:41.731296 | orchestrator | export CEPH_VERSION=reef 2026-03-09 00:15:41.731305 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-09 00:15:41.731313 | orchestrator 
| export MANAGER_VERSION=latest 2026-03-09 00:15:41.731328 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-09 00:15:41.731335 | orchestrator | 2026-03-09 00:15:41.731346 | orchestrator | export ARA=false 2026-03-09 00:15:41.731353 | orchestrator | export DEPLOY_MODE=manager 2026-03-09 00:15:41.731363 | orchestrator | export TEMPEST=true 2026-03-09 00:15:41.731370 | orchestrator | export IS_ZUUL=true 2026-03-09 00:15:41.731376 | orchestrator | 2026-03-09 00:15:41.731387 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.100 2026-03-09 00:15:41.731394 | orchestrator | export EXTERNAL_API=false 2026-03-09 00:15:41.731400 | orchestrator | 2026-03-09 00:15:41.731407 | orchestrator | export IMAGE_USER=ubuntu 2026-03-09 00:15:41.731416 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-09 00:15:41.731422 | orchestrator | 2026-03-09 00:15:41.731428 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-09 00:15:41.731438 | orchestrator | 2026-03-09 00:15:41.731445 | orchestrator | + echo 2026-03-09 00:15:41.731455 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-09 00:15:41.732162 | orchestrator | ++ export INTERACTIVE=false 2026-03-09 00:15:41.732174 | orchestrator | ++ INTERACTIVE=false 2026-03-09 00:15:41.732182 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-09 00:15:41.732190 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-09 00:15:41.732463 | orchestrator | + source /opt/manager-vars.sh 2026-03-09 00:15:41.732472 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-09 00:15:41.732480 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-09 00:15:41.732535 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-09 00:15:41.732544 | orchestrator | ++ CEPH_VERSION=reef 2026-03-09 00:15:41.732550 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-09 00:15:41.732557 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-09 00:15:41.732563 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-09 00:15:41.732569 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-03-09 00:15:41.732575 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-09 00:15:41.732588 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-09 00:15:41.732711 | orchestrator | ++ export ARA=false 2026-03-09 00:15:41.732721 | orchestrator | ++ ARA=false 2026-03-09 00:15:41.732727 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-09 00:15:41.732733 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-09 00:15:41.732739 | orchestrator | ++ export TEMPEST=true 2026-03-09 00:15:41.732745 | orchestrator | ++ TEMPEST=true 2026-03-09 00:15:41.732752 | orchestrator | ++ export IS_ZUUL=true 2026-03-09 00:15:41.732758 | orchestrator | ++ IS_ZUUL=true 2026-03-09 00:15:41.732764 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.100 2026-03-09 00:15:41.732770 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.100 2026-03-09 00:15:41.732776 | orchestrator | ++ export EXTERNAL_API=false 2026-03-09 00:15:41.732783 | orchestrator | ++ EXTERNAL_API=false 2026-03-09 00:15:41.732789 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-09 00:15:41.732795 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-09 00:15:41.732801 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-09 00:15:41.732807 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-09 00:15:41.732813 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-09 00:15:41.732819 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-09 00:15:41.732826 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-09 00:15:41.781073 | orchestrator | + docker version 2026-03-09 00:15:41.873705 | orchestrator | Client: Docker Engine - Community 2026-03-09 00:15:41.873816 | orchestrator | Version: 27.5.1 2026-03-09 00:15:41.873840 | orchestrator | API version: 1.47 2026-03-09 00:15:41.873861 | orchestrator | Go version: go1.22.11 2026-03-09 00:15:41.873879 | orchestrator | Git commit: 9f9e405 2026-03-09 00:15:41.873898 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-09 00:15:41.873916 | orchestrator | OS/Arch: linux/amd64 2026-03-09 00:15:41.873934 | orchestrator | Context: default 2026-03-09 00:15:41.873953 | orchestrator | 2026-03-09 00:15:41.873971 | orchestrator | Server: Docker Engine - Community 2026-03-09 00:15:41.873991 | orchestrator | Engine: 2026-03-09 00:15:41.874010 | orchestrator | Version: 27.5.1 2026-03-09 00:15:41.874087 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-09 00:15:41.874129 | orchestrator | Go version: go1.22.11 2026-03-09 00:15:41.874140 | orchestrator | Git commit: 4c9b3b0 2026-03-09 00:15:41.874151 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-09 00:15:41.874162 | orchestrator | OS/Arch: linux/amd64 2026-03-09 00:15:41.874173 | orchestrator | Experimental: false 2026-03-09 00:15:41.874184 | orchestrator | containerd: 2026-03-09 00:15:41.874195 | orchestrator | Version: v2.2.1 2026-03-09 00:15:41.874206 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-09 00:15:41.874217 | orchestrator | runc: 2026-03-09 00:15:41.874228 | orchestrator | Version: 1.3.4 2026-03-09 00:15:41.874239 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-09 00:15:41.874251 | orchestrator | docker-init: 2026-03-09 00:15:41.874261 | orchestrator | Version: 0.19.0 2026-03-09 00:15:41.874273 | orchestrator | GitCommit: de40ad0 2026-03-09 00:15:41.876966 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-09 00:15:41.886300 | orchestrator | + set -e 2026-03-09 00:15:41.887810 | orchestrator | + source /opt/manager-vars.sh 2026-03-09 00:15:41.887852 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-09 00:15:41.887873 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-09 00:15:41.887892 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-09 00:15:41.887910 | orchestrator | ++ CEPH_VERSION=reef 2026-03-09 00:15:41.887930 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-09 
00:15:41.887952 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-09 00:15:41.887976 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-09 00:15:41.887997 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-09 00:15:41.888017 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-09 00:15:41.888037 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-09 00:15:41.888055 | orchestrator | ++ export ARA=false 2026-03-09 00:15:41.888074 | orchestrator | ++ ARA=false 2026-03-09 00:15:41.888094 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-09 00:15:41.888114 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-09 00:15:41.888134 | orchestrator | ++ export TEMPEST=true 2026-03-09 00:15:41.888154 | orchestrator | ++ TEMPEST=true 2026-03-09 00:15:41.888173 | orchestrator | ++ export IS_ZUUL=true 2026-03-09 00:15:41.888191 | orchestrator | ++ IS_ZUUL=true 2026-03-09 00:15:41.888210 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.100 2026-03-09 00:15:41.888230 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.100 2026-03-09 00:15:41.888250 | orchestrator | ++ export EXTERNAL_API=false 2026-03-09 00:15:41.888270 | orchestrator | ++ EXTERNAL_API=false 2026-03-09 00:15:41.888289 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-09 00:15:41.888309 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-09 00:15:41.888328 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-09 00:15:41.888347 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-09 00:15:41.888367 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-09 00:15:41.888386 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-09 00:15:41.888404 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-09 00:15:41.888423 | orchestrator | ++ export INTERACTIVE=false 2026-03-09 00:15:41.888443 | orchestrator | ++ INTERACTIVE=false 2026-03-09 00:15:41.888459 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-09 00:15:41.888480 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-03-09 00:15:41.888549 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-09 00:15:41.888570 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-09 00:15:41.888587 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-03-09 00:15:41.894192 | orchestrator | + set -e 2026-03-09 00:15:41.894274 | orchestrator | + VERSION=reef 2026-03-09 00:15:41.895178 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-09 00:15:41.903139 | orchestrator | + [[ -n ceph_version: reef ]] 2026-03-09 00:15:41.903185 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-03-09 00:15:41.908286 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-03-09 00:15:41.913333 | orchestrator | + set -e 2026-03-09 00:15:41.913390 | orchestrator | + VERSION=2024.2 2026-03-09 00:15:41.914220 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-09 00:15:41.917753 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-03-09 00:15:41.917830 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-03-09 00:15:41.924184 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-09 00:15:41.924696 | orchestrator | ++ semver latest 7.0.0 2026-03-09 00:15:41.991030 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-09 00:15:41.991120 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-09 00:15:41.991134 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-09 00:15:41.991526 | orchestrator | ++ semver latest 10.0.0-0 2026-03-09 00:15:42.052895 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-09 00:15:42.053576 | orchestrator | ++ semver 2024.2 2025.1 2026-03-09 00:15:42.115127 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-09 00:15:42.115221 | orchestrator | + 
/opt/configuration/scripts/enable-resource-nodes.sh 2026-03-09 00:15:42.205159 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-09 00:15:42.206234 | orchestrator | + source /opt/venv/bin/activate 2026-03-09 00:15:42.207662 | orchestrator | ++ deactivate nondestructive 2026-03-09 00:15:42.207707 | orchestrator | ++ '[' -n '' ']' 2026-03-09 00:15:42.207720 | orchestrator | ++ '[' -n '' ']' 2026-03-09 00:15:42.207732 | orchestrator | ++ hash -r 2026-03-09 00:15:42.207743 | orchestrator | ++ '[' -n '' ']' 2026-03-09 00:15:42.207754 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-09 00:15:42.207765 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-09 00:15:42.207778 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-09 00:15:42.207797 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-09 00:15:42.207809 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-09 00:15:42.207910 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-09 00:15:42.207925 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-09 00:15:42.207937 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-09 00:15:42.207949 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-09 00:15:42.207960 | orchestrator | ++ export PATH 2026-03-09 00:15:42.207975 | orchestrator | ++ '[' -n '' ']' 2026-03-09 00:15:42.208127 | orchestrator | ++ '[' -z '' ']' 2026-03-09 00:15:42.208142 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-09 00:15:42.208153 | orchestrator | ++ PS1='(venv) ' 2026-03-09 00:15:42.208164 | orchestrator | ++ export PS1 2026-03-09 00:15:42.208174 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-09 00:15:42.208186 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-09 00:15:42.208256 | orchestrator | ++ hash -r 2026-03-09 00:15:42.208406 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-09 00:15:43.306313 | orchestrator | 2026-03-09 00:15:43.306424 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-09 00:15:43.306440 | orchestrator | 2026-03-09 00:15:43.306452 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-09 00:15:43.838971 | orchestrator | ok: [testbed-manager] 2026-03-09 00:15:43.839076 | orchestrator | 2026-03-09 00:15:43.839093 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-09 00:15:44.801942 | orchestrator | changed: [testbed-manager] 2026-03-09 00:15:44.802051 | orchestrator | 2026-03-09 00:15:44.802061 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-09 00:15:44.802067 | orchestrator | 2026-03-09 00:15:44.802072 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:15:48.095298 | orchestrator | ok: [testbed-manager] 2026-03-09 00:15:48.095419 | orchestrator | 2026-03-09 00:15:48.095435 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-09 00:15:48.147039 | orchestrator | ok: [testbed-manager] 2026-03-09 00:15:48.147140 | orchestrator | 2026-03-09 00:15:48.147160 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-09 00:15:48.607908 | orchestrator | changed: [testbed-manager] 2026-03-09 00:15:48.607989 | orchestrator | 2026-03-09 00:15:48.608000 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-09 00:15:48.652822 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:15:48.652921 | orchestrator | 2026-03-09 00:15:48.652938 | orchestrator | TASK [Install HWE 
kernel package on Ubuntu] ************************************ 2026-03-09 00:15:48.992038 | orchestrator | changed: [testbed-manager] 2026-03-09 00:15:48.992139 | orchestrator | 2026-03-09 00:15:48.992154 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-09 00:15:49.321094 | orchestrator | ok: [testbed-manager] 2026-03-09 00:15:49.321192 | orchestrator | 2026-03-09 00:15:49.321210 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-09 00:15:49.429218 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:15:49.429310 | orchestrator | 2026-03-09 00:15:49.429324 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-09 00:15:49.429336 | orchestrator | 2026-03-09 00:15:49.429347 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:15:52.118244 | orchestrator | ok: [testbed-manager] 2026-03-09 00:15:52.118365 | orchestrator | 2026-03-09 00:15:52.118392 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-09 00:15:52.211671 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-09 00:15:52.211756 | orchestrator | 2026-03-09 00:15:52.211772 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-09 00:15:52.266849 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-09 00:15:52.266958 | orchestrator | 2026-03-09 00:15:52.266984 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-09 00:15:53.405935 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-09 00:15:53.407088 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2026-03-09 00:15:53.407151 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-09 00:15:53.407162 | orchestrator | 2026-03-09 00:15:53.407172 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-09 00:15:55.184461 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-09 00:15:55.184589 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-09 00:15:55.184603 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-09 00:15:55.184614 | orchestrator | 2026-03-09 00:15:55.184620 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-09 00:15:55.827794 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-09 00:15:55.827895 | orchestrator | changed: [testbed-manager] 2026-03-09 00:15:55.827912 | orchestrator | 2026-03-09 00:15:55.827924 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-09 00:15:56.468951 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-09 00:15:56.469050 | orchestrator | changed: [testbed-manager] 2026-03-09 00:15:56.469067 | orchestrator | 2026-03-09 00:15:56.469080 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-09 00:15:56.523497 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:15:56.523630 | orchestrator | 2026-03-09 00:15:56.523646 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-09 00:15:56.883929 | orchestrator | ok: [testbed-manager] 2026-03-09 00:15:56.884030 | orchestrator | 2026-03-09 00:15:56.884046 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-09 00:15:56.952027 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-09 00:15:56.952122 | orchestrator | 2026-03-09 00:15:56.952139 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-09 00:15:57.983788 | orchestrator | changed: [testbed-manager] 2026-03-09 00:15:57.983879 | orchestrator | 2026-03-09 00:15:57.983895 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-09 00:15:58.827854 | orchestrator | changed: [testbed-manager] 2026-03-09 00:15:58.827930 | orchestrator | 2026-03-09 00:15:58.827942 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-09 00:16:14.689921 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:14.690159 | orchestrator | 2026-03-09 00:16:14.690202 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-09 00:16:14.737402 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:16:14.737496 | orchestrator | 2026-03-09 00:16:14.737513 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-09 00:16:14.737556 | orchestrator | 2026-03-09 00:16:14.737567 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:16:16.489435 | orchestrator | ok: [testbed-manager] 2026-03-09 00:16:16.489624 | orchestrator | 2026-03-09 00:16:16.489700 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-09 00:16:16.944511 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-09 00:16:16.944649 | orchestrator | 2026-03-09 00:16:16.944679 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-09 00:16:16.999874 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-09 00:16:16.999985 | orchestrator | 2026-03-09 00:16:17.000009 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-09 00:16:19.278615 | orchestrator | ok: [testbed-manager] 2026-03-09 00:16:19.278777 | orchestrator | 2026-03-09 00:16:19.278793 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-09 00:16:19.333613 | orchestrator | ok: [testbed-manager] 2026-03-09 00:16:19.333698 | orchestrator | 2026-03-09 00:16:19.333711 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-09 00:16:19.458230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-09 00:16:19.458340 | orchestrator | 2026-03-09 00:16:19.458359 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-09 00:16:22.208905 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-09 00:16:22.209007 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-09 00:16:22.209022 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-09 00:16:22.209035 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-09 00:16:22.209046 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-09 00:16:22.209057 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-09 00:16:22.209068 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-09 00:16:22.209079 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-09 00:16:22.209091 | orchestrator | 2026-03-09 00:16:22.209103 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2026-03-09 00:16:22.827184 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:22.827285 | orchestrator | 2026-03-09 00:16:22.827301 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-09 00:16:23.442600 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:23.442698 | orchestrator | 2026-03-09 00:16:23.442714 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-09 00:16:23.519791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-09 00:16:23.519898 | orchestrator | 2026-03-09 00:16:23.519923 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-09 00:16:24.639970 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-09 00:16:24.640066 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-09 00:16:24.640079 | orchestrator | 2026-03-09 00:16:24.640090 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-09 00:16:25.247239 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:25.247334 | orchestrator | 2026-03-09 00:16:25.247352 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-09 00:16:25.294353 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:16:25.294445 | orchestrator | 2026-03-09 00:16:25.294461 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-09 00:16:25.364913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-09 00:16:25.365001 | orchestrator | 2026-03-09 00:16:25.365016 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2026-03-09 00:16:25.975961 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:25.976072 | orchestrator | 2026-03-09 00:16:25.976098 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-09 00:16:26.041480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-09 00:16:26.041623 | orchestrator | 2026-03-09 00:16:26.041640 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-09 00:16:27.314867 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-09 00:16:27.314954 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-09 00:16:27.314967 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:27.314980 | orchestrator | 2026-03-09 00:16:27.314990 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-09 00:16:27.942954 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:27.943050 | orchestrator | 2026-03-09 00:16:27.943061 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-09 00:16:28.004023 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:16:28.004136 | orchestrator | 2026-03-09 00:16:28.004160 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-09 00:16:28.096153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-09 00:16:28.096243 | orchestrator | 2026-03-09 00:16:28.096258 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-09 00:16:28.601218 | orchestrator | changed: [testbed-manager] 2026-03-09 
00:16:28.601309 | orchestrator | 2026-03-09 00:16:28.601347 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-09 00:16:28.958731 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:28.958797 | orchestrator | 2026-03-09 00:16:28.958804 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-09 00:16:30.125337 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-09 00:16:30.125429 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-09 00:16:30.125444 | orchestrator | 2026-03-09 00:16:30.125457 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-09 00:16:30.771950 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:30.772051 | orchestrator | 2026-03-09 00:16:30.772069 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-09 00:16:31.135223 | orchestrator | ok: [testbed-manager] 2026-03-09 00:16:31.135377 | orchestrator | 2026-03-09 00:16:31.135394 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-09 00:16:31.474305 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:31.474412 | orchestrator | 2026-03-09 00:16:31.474433 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-09 00:16:31.517466 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:16:31.517581 | orchestrator | 2026-03-09 00:16:31.517597 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-09 00:16:31.585932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-09 00:16:31.586064 | orchestrator | 2026-03-09 00:16:31.586083 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2026-03-09 00:16:31.626561 | orchestrator | ok: [testbed-manager] 2026-03-09 00:16:31.626642 | orchestrator | 2026-03-09 00:16:31.626657 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-09 00:16:33.515949 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-09 00:16:33.516046 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-09 00:16:33.516061 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-09 00:16:33.516074 | orchestrator | 2026-03-09 00:16:33.516086 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-09 00:16:34.190760 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:34.190885 | orchestrator | 2026-03-09 00:16:34.190912 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-09 00:16:34.856768 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:34.856895 | orchestrator | 2026-03-09 00:16:34.856922 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-09 00:16:35.534294 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:35.534400 | orchestrator | 2026-03-09 00:16:35.534418 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-09 00:16:35.600655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-09 00:16:35.600777 | orchestrator | 2026-03-09 00:16:35.600795 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-09 00:16:35.646752 | orchestrator | ok: [testbed-manager] 2026-03-09 00:16:35.646852 | orchestrator | 2026-03-09 00:16:35.646868 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-03-09 00:16:36.308955 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-09 00:16:36.309055 | orchestrator | 2026-03-09 00:16:36.309072 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-09 00:16:36.378401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-09 00:16:36.378490 | orchestrator | 2026-03-09 00:16:36.378512 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-09 00:16:37.038457 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:37.038584 | orchestrator | 2026-03-09 00:16:37.038602 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-09 00:16:37.609950 | orchestrator | ok: [testbed-manager] 2026-03-09 00:16:37.610105 | orchestrator | 2026-03-09 00:16:37.610123 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-09 00:16:37.667058 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:16:37.667151 | orchestrator | 2026-03-09 00:16:37.667166 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-09 00:16:37.717932 | orchestrator | ok: [testbed-manager] 2026-03-09 00:16:37.718056 | orchestrator | 2026-03-09 00:16:37.718074 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-09 00:16:38.519742 | orchestrator | changed: [testbed-manager] 2026-03-09 00:16:38.519839 | orchestrator | 2026-03-09 00:16:38.519855 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-09 00:17:51.723186 | orchestrator | changed: [testbed-manager] 2026-03-09 00:17:51.723334 | orchestrator | 2026-03-09 
00:17:51.723358 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-09 00:17:52.637262 | orchestrator | ok: [testbed-manager] 2026-03-09 00:17:52.637367 | orchestrator | 2026-03-09 00:17:52.637383 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-09 00:17:52.677379 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:17:52.677466 | orchestrator | 2026-03-09 00:17:52.677477 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-09 00:17:55.063485 | orchestrator | changed: [testbed-manager] 2026-03-09 00:17:55.063654 | orchestrator | 2026-03-09 00:17:55.063675 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-09 00:17:55.157033 | orchestrator | ok: [testbed-manager] 2026-03-09 00:17:55.157129 | orchestrator | 2026-03-09 00:17:55.157173 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-09 00:17:55.157187 | orchestrator | 2026-03-09 00:17:55.157197 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-09 00:17:55.214799 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:17:55.214883 | orchestrator | 2026-03-09 00:17:55.214893 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-09 00:18:55.267151 | orchestrator | Pausing for 60 seconds 2026-03-09 00:18:55.267243 | orchestrator | changed: [testbed-manager] 2026-03-09 00:18:55.267254 | orchestrator | 2026-03-09 00:18:55.267262 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-09 00:18:58.306549 | orchestrator | changed: [testbed-manager] 2026-03-09 00:18:58.306701 | orchestrator | 2026-03-09 00:18:58.306717 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-03-09 00:19:39.794775 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-09 00:19:39.794876 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-09 00:19:39.794885 | orchestrator | changed: [testbed-manager] 2026-03-09 00:19:39.794936 | orchestrator | 2026-03-09 00:19:39.794942 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-09 00:19:50.162196 | orchestrator | changed: [testbed-manager] 2026-03-09 00:19:50.162312 | orchestrator | 2026-03-09 00:19:50.162324 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-09 00:19:50.266851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-09 00:19:50.266980 | orchestrator | 2026-03-09 00:19:50.266998 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-09 00:19:50.267010 | orchestrator | 2026-03-09 00:19:50.267022 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-09 00:19:50.321637 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:19:50.321735 | orchestrator | 2026-03-09 00:19:50.321750 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-09 00:19:50.401159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-09 00:19:50.401256 | orchestrator | 2026-03-09 00:19:50.401271 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-09 00:19:51.176196 | orchestrator | changed: [testbed-manager] 2026-03-09 00:19:51.176295 | 
orchestrator | 2026-03-09 00:19:51.176310 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-09 00:19:54.353271 | orchestrator | ok: [testbed-manager] 2026-03-09 00:19:54.353367 | orchestrator | 2026-03-09 00:19:54.353383 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-09 00:19:54.427163 | orchestrator | ok: [testbed-manager] => { 2026-03-09 00:19:54.427255 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-09 00:19:54.427270 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-09 00:19:54.427283 | orchestrator | "Checking running containers against expected versions...", 2026-03-09 00:19:54.427297 | orchestrator | "", 2026-03-09 00:19:54.427312 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-09 00:19:54.427323 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-09 00:19:54.427335 | orchestrator | " Enabled: true", 2026-03-09 00:19:54.427346 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-09 00:19:54.427357 | orchestrator | " Status: ✅ MATCH", 2026-03-09 00:19:54.427368 | orchestrator | "", 2026-03-09 00:19:54.427379 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-09 00:19:54.427391 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-09 00:19:54.427401 | orchestrator | " Enabled: true", 2026-03-09 00:19:54.427412 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-09 00:19:54.427423 | orchestrator | " Status: ✅ MATCH", 2026-03-09 00:19:54.427434 | orchestrator | "", 2026-03-09 00:19:54.427444 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-09 00:19:54.427455 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-09 
00:19:54.427466 | orchestrator | " Enabled: true", 2026-03-09 00:19:54.427477 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-09 00:19:54.427487 | orchestrator | " Status: ✅ MATCH", 2026-03-09 00:19:54.427498 | orchestrator | "", 2026-03-09 00:19:54.427509 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-09 00:19:54.427520 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-09 00:19:54.427531 | orchestrator | " Enabled: true", 2026-03-09 00:19:54.427542 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-09 00:19:54.427553 | orchestrator | " Status: ✅ MATCH", 2026-03-09 00:19:54.427564 | orchestrator | "", 2026-03-09 00:19:54.427574 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-09 00:19:54.427585 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-09 00:19:54.427664 | orchestrator | " Enabled: true", 2026-03-09 00:19:54.427678 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-09 00:19:54.427689 | orchestrator | " Status: ✅ MATCH", 2026-03-09 00:19:54.427700 | orchestrator | "", 2026-03-09 00:19:54.427713 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-09 00:19:54.427727 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-09 00:19:54.427740 | orchestrator | " Enabled: true", 2026-03-09 00:19:54.427751 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-09 00:19:54.427765 | orchestrator | " Status: ✅ MATCH", 2026-03-09 00:19:54.427777 | orchestrator | "", 2026-03-09 00:19:54.427790 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-09 00:19:54.427804 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-09 00:19:54.427817 | orchestrator | " Enabled: true", 2026-03-09 00:19:54.427829 | orchestrator | " Running: 
registry.osism.tech/osism/ara-server:1.7.3",
2026-03-09 00:19:54.427842 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:19:54.427854 | orchestrator | "",
2026-03-09 00:19:54.427867 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-09 00:19:54.427880 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-09 00:19:54.427893 | orchestrator | " Enabled: true",
2026-03-09 00:19:54.427905 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-09 00:19:54.427917 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:19:54.427930 | orchestrator | "",
2026-03-09 00:19:54.427951 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-09 00:19:54.427964 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-03-09 00:19:54.427981 | orchestrator | " Enabled: true",
2026-03-09 00:19:54.427994 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-03-09 00:19:54.428007 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:19:54.428019 | orchestrator | "",
2026-03-09 00:19:54.428033 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-09 00:19:54.428045 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-09 00:19:54.428058 | orchestrator | " Enabled: true",
2026-03-09 00:19:54.428070 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-09 00:19:54.428081 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:19:54.428092 | orchestrator | "",
2026-03-09 00:19:54.428103 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-09 00:19:54.428114 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-09 00:19:54.428124 | orchestrator | " Enabled: true",
2026-03-09 00:19:54.428135 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-09 00:19:54.428146 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:19:54.428157 | orchestrator | "",
2026-03-09 00:19:54.428167 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-09 00:19:54.428178 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-09 00:19:54.428189 | orchestrator | " Enabled: true",
2026-03-09 00:19:54.428199 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-09 00:19:54.428210 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:19:54.428221 | orchestrator | "",
2026-03-09 00:19:54.428231 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-03-09 00:19:54.428242 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-09 00:19:54.428253 | orchestrator | " Enabled: true",
2026-03-09 00:19:54.428263 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-09 00:19:54.428274 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:19:54.428285 | orchestrator | "",
2026-03-09 00:19:54.428295 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-03-09 00:19:54.428383 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-09 00:19:54.428397 | orchestrator | " Enabled: true",
2026-03-09 00:19:54.428408 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-09 00:19:54.428428 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:19:54.428439 | orchestrator | "",
2026-03-09 00:19:54.428450 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-03-09 00:19:54.428477 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-09 00:19:54.428488 | orchestrator | " Enabled: true",
2026-03-09 00:19:54.428499 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-09 00:19:54.428510 | orchestrator | " Status: ✅ MATCH",
2026-03-09 00:19:54.428521 | orchestrator | "",
2026-03-09 00:19:54.428532 | orchestrator | "=== Summary ===",
2026-03-09 00:19:54.428543 | orchestrator | "Errors (version mismatches): 0",
2026-03-09 00:19:54.428553 | orchestrator | "Warnings (expected containers not running): 0",
2026-03-09 00:19:54.428564 | orchestrator | "",
2026-03-09 00:19:54.428575 | orchestrator | "✅ All running containers match expected versions!"
2026-03-09 00:19:54.428586 | orchestrator | ]
2026-03-09 00:19:54.428597 | orchestrator | }
2026-03-09 00:19:54.428609 | orchestrator |
2026-03-09 00:19:54.428644 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-09 00:19:54.482196 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:19:54.482274 | orchestrator |
2026-03-09 00:19:54.482284 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:19:54.482294 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-09 00:19:54.482307 | orchestrator |
2026-03-09 00:19:54.573566 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-09 00:19:54.573737 | orchestrator | + deactivate
2026-03-09 00:19:54.573757 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-09 00:19:54.573773 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-09 00:19:54.573785 | orchestrator | + export PATH
2026-03-09 00:19:54.573796 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-09 00:19:54.573808 | orchestrator | + '[' -n '' ']'
2026-03-09 00:19:54.573818 | orchestrator | + hash -r
2026-03-09 00:19:54.573829 | orchestrator | + '[' -n '' ']'
2026-03-09 00:19:54.573840 | orchestrator | + unset VIRTUAL_ENV
2026-03-09 00:19:54.573851 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-09 00:19:54.573862 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-09 00:19:54.573873 | orchestrator | + unset -f deactivate
2026-03-09 00:19:54.573884 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-03-09 00:19:54.582878 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-09 00:19:54.582979 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-09 00:19:54.582995 | orchestrator | + local max_attempts=60
2026-03-09 00:19:54.583008 | orchestrator | + local name=ceph-ansible
2026-03-09 00:19:54.583019 | orchestrator | + local attempt_num=1
2026-03-09 00:19:54.583929 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-09 00:19:54.620840 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-09 00:19:54.620932 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-09 00:19:54.620946 | orchestrator | + local max_attempts=60
2026-03-09 00:19:54.620958 | orchestrator | + local name=kolla-ansible
2026-03-09 00:19:54.620969 | orchestrator | + local attempt_num=1
2026-03-09 00:19:54.620980 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-09 00:19:54.658453 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-09 00:19:54.658572 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-09 00:19:54.658596 | orchestrator | + local max_attempts=60
2026-03-09 00:19:54.658641 | orchestrator | + local name=osism-ansible
2026-03-09 00:19:54.658663 | orchestrator | + local attempt_num=1
2026-03-09 00:19:54.659245 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-09 00:19:54.702282 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-09 00:19:54.702373 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-09 00:19:54.702387 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-09 00:19:55.362194 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-09 00:19:55.539261 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-09 00:19:55.539442 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2026-03-09 00:19:55.539463 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2026-03-09 00:19:55.539476 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2026-03-09 00:19:55.539486 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp
2026-03-09 00:19:55.539494 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy)
2026-03-09 00:19:55.539501 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy)
2026-03-09 00:19:55.539508 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 57 seconds (healthy)
2026-03-09 00:19:55.539530 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy)
2026-03-09 00:19:55.539538 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp
2026-03-09 00:19:55.539545 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up About a minute (healthy)
2026-03-09 00:19:55.539552 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp
2026-03-09 00:19:55.539560 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2026-03-09 00:19:55.539567 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp
2026-03-09 00:19:55.539574 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2026-03-09 00:19:55.539581 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy)
2026-03-09 00:19:55.545647 | orchestrator | ++ semver latest 7.0.0
2026-03-09 00:19:55.595421 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-09 00:19:55.595513 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-09 00:19:55.595523 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-03-09 00:19:55.599674 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-03-09 00:20:07.717043 | orchestrator | 2026-03-09 00:20:07 | INFO  | Prepare task for execution of resolvconf.
2026-03-09 00:20:07.918685 | orchestrator | 2026-03-09 00:20:07 | INFO  | Task 93d7f999-199d-491b-9fce-f1d127b0fe83 (resolvconf) was prepared for execution.
2026-03-09 00:20:07.918770 | orchestrator | 2026-03-09 00:20:07 | INFO  | It takes a moment until task 93d7f999-199d-491b-9fce-f1d127b0fe83 (resolvconf) has been started and output is visible here.
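The `wait_for_container_healthy` calls visible in the xtrace above poll Docker's reported health status for each manager container before the deployment proceeds. A minimal sketch of what such a helper might look like, reconstructed from the trace — the polling interval and the failure message are assumptions, as the log only shows the happy path where the first check already returns `healthy`:

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the health-wait helper suggested by the xtrace.
# The sleep interval and error message are assumptions not visible in the log.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} not healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5  # assumed polling interval
    done
}
```

In the job the helper is invoked as `wait_for_container_healthy 60 ceph-ansible`; since `docker inspect` already prints `healthy` on the first check, the loop exits immediately.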
2026-03-09 00:20:21.442771 | orchestrator |
2026-03-09 00:20:21.442853 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-09 00:20:21.442862 | orchestrator |
2026-03-09 00:20:21.442868 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 00:20:21.442874 | orchestrator | Monday 09 March 2026 00:20:11 +0000 (0:00:00.101) 0:00:00.101 **********
2026-03-09 00:20:21.442880 | orchestrator | ok: [testbed-manager]
2026-03-09 00:20:21.442887 | orchestrator |
2026-03-09 00:20:21.442892 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-09 00:20:21.442898 | orchestrator | Monday 09 March 2026 00:20:15 +0000 (0:00:03.689) 0:00:03.791 **********
2026-03-09 00:20:21.442904 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:20:21.442909 | orchestrator |
2026-03-09 00:20:21.442915 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-09 00:20:21.442920 | orchestrator | Monday 09 March 2026 00:20:15 +0000 (0:00:00.067) 0:00:03.858 **********
2026-03-09 00:20:21.442925 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-09 00:20:21.442932 | orchestrator |
2026-03-09 00:20:21.442937 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-09 00:20:21.442942 | orchestrator | Monday 09 March 2026 00:20:15 +0000 (0:00:00.081) 0:00:03.940 **********
2026-03-09 00:20:21.442947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-09 00:20:21.442952 | orchestrator |
2026-03-09 00:20:21.442964 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-09 00:20:21.442970 | orchestrator | Monday 09 March 2026 00:20:15 +0000 (0:00:00.081) 0:00:04.021 **********
2026-03-09 00:20:21.442975 | orchestrator | ok: [testbed-manager]
2026-03-09 00:20:21.442980 | orchestrator |
2026-03-09 00:20:21.442985 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-09 00:20:21.442991 | orchestrator | Monday 09 March 2026 00:20:16 +0000 (0:00:01.044) 0:00:05.066 **********
2026-03-09 00:20:21.442996 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:20:21.443001 | orchestrator |
2026-03-09 00:20:21.443006 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-09 00:20:21.443011 | orchestrator | Monday 09 March 2026 00:20:16 +0000 (0:00:00.064) 0:00:05.131 **********
2026-03-09 00:20:21.443016 | orchestrator | ok: [testbed-manager]
2026-03-09 00:20:21.443021 | orchestrator |
2026-03-09 00:20:21.443026 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-09 00:20:21.443031 | orchestrator | Monday 09 March 2026 00:20:17 +0000 (0:00:00.509) 0:00:05.641 **********
2026-03-09 00:20:21.443036 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:20:21.443041 | orchestrator |
2026-03-09 00:20:21.443047 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-09 00:20:21.443052 | orchestrator | Monday 09 March 2026 00:20:17 +0000 (0:00:00.074) 0:00:05.715 **********
2026-03-09 00:20:21.443057 | orchestrator | changed: [testbed-manager]
2026-03-09 00:20:21.443062 | orchestrator |
2026-03-09 00:20:21.443067 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-09 00:20:21.443073 | orchestrator | Monday 09 March 2026 00:20:17 +0000 (0:00:00.533) 0:00:06.249 **********
2026-03-09 00:20:21.443078 | orchestrator | changed: [testbed-manager]
2026-03-09 00:20:21.443083 | orchestrator |
2026-03-09 00:20:21.443088 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-09 00:20:21.443093 | orchestrator | Monday 09 March 2026 00:20:19 +0000 (0:00:01.087) 0:00:07.336 **********
2026-03-09 00:20:21.443098 | orchestrator | ok: [testbed-manager]
2026-03-09 00:20:21.443103 | orchestrator |
2026-03-09 00:20:21.443108 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-09 00:20:21.443128 | orchestrator | Monday 09 March 2026 00:20:20 +0000 (0:00:00.944) 0:00:08.280 **********
2026-03-09 00:20:21.443133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-09 00:20:21.443138 | orchestrator |
2026-03-09 00:20:21.443143 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-09 00:20:21.443148 | orchestrator | Monday 09 March 2026 00:20:20 +0000 (0:00:00.076) 0:00:08.357 **********
2026-03-09 00:20:21.443154 | orchestrator | changed: [testbed-manager]
2026-03-09 00:20:21.443159 | orchestrator |
2026-03-09 00:20:21.443164 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:20:21.443170 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-09 00:20:21.443176 | orchestrator |
2026-03-09 00:20:21.443181 | orchestrator |
2026-03-09 00:20:21.443186 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:20:21.443191 | orchestrator | Monday 09 March 2026 00:20:21 +0000 (0:00:01.123) 0:00:09.481 **********
2026-03-09 00:20:21.443196 | orchestrator | ===============================================================================
2026-03-09 00:20:21.443201 | orchestrator | Gathering Facts --------------------------------------------------------- 3.69s
2026-03-09 00:20:21.443206 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.12s
2026-03-09 00:20:21.443211 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s
2026-03-09 00:20:21.443216 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.04s
2026-03-09 00:20:21.443221 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s
2026-03-09 00:20:21.443226 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s
2026-03-09 00:20:21.443242 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s
2026-03-09 00:20:21.443248 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2026-03-09 00:20:21.443253 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2026-03-09 00:20:21.443258 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-03-09 00:20:21.443263 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s
2026-03-09 00:20:21.443268 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2026-03-09 00:20:21.443273 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2026-03-09 00:20:21.742717 | orchestrator | + osism apply sshconfig
2026-03-09 00:20:33.733084 | orchestrator | 2026-03-09 00:20:33 | INFO  | Prepare task for execution of sshconfig.
2026-03-09 00:20:33.839112 | orchestrator | 2026-03-09 00:20:33 | INFO  | Task b8abf4ff-85de-466d-bc32-9a6440c0f1e9 (sshconfig) was prepared for execution.
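The resolvconf play above replaces `/etc/resolv.conf` with a symlink to systemd-resolved's stub resolver and then restarts the service. The "Link" task boils down to something like the following — a sketch of the effect under that assumption, not the role's actual implementation; paths are taken from the task names in the log:

```shell
#!/usr/bin/env bash
# Sketch of the effect of the "Link /run/systemd/resolve/stub-resolv.conf
# to /etc/resolv.conf" task; not the osism.commons.resolvconf role's code.
set -euo pipefail

link_stub_resolver() {
    local target="$1"   # e.g. /run/systemd/resolve/stub-resolv.conf
    local link="$2"     # e.g. /etc/resolv.conf
    # Replace any existing file or link so DNS queries go through the stub.
    ln -sf "$target" "$link"
}
```

After the link is in place, a `systemctl restart systemd-resolved` corresponds to the "Restart systemd-resolved service" handler at the end of the play.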
2026-03-09 00:20:33.839209 | orchestrator | 2026-03-09 00:20:33 | INFO  | It takes a moment until task b8abf4ff-85de-466d-bc32-9a6440c0f1e9 (sshconfig) has been started and output is visible here.
2026-03-09 00:20:45.485169 | orchestrator |
2026-03-09 00:20:45.485302 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-03-09 00:20:45.485320 | orchestrator |
2026-03-09 00:20:45.485332 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-03-09 00:20:45.485363 | orchestrator | Monday 09 March 2026 00:20:38 +0000 (0:00:00.161) 0:00:00.161 **********
2026-03-09 00:20:45.485422 | orchestrator | ok: [testbed-manager]
2026-03-09 00:20:45.485438 | orchestrator |
2026-03-09 00:20:45.485450 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-03-09 00:20:45.485461 | orchestrator | Monday 09 March 2026 00:20:38 +0000 (0:00:00.552) 0:00:00.713 **********
2026-03-09 00:20:45.485498 | orchestrator | changed: [testbed-manager]
2026-03-09 00:20:45.485511 | orchestrator |
2026-03-09 00:20:45.485521 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-03-09 00:20:45.485532 | orchestrator | Monday 09 March 2026 00:20:39 +0000 (0:00:00.544) 0:00:01.258 **********
2026-03-09 00:20:45.485543 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-03-09 00:20:45.485554 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-03-09 00:20:45.485565 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-03-09 00:20:45.485575 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-03-09 00:20:45.485586 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-03-09 00:20:45.485596 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-03-09 00:20:45.485606 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-03-09 00:20:45.485617 | orchestrator |
2026-03-09 00:20:45.485627 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-03-09 00:20:45.485685 | orchestrator | Monday 09 March 2026 00:20:44 +0000 (0:00:05.511) 0:00:06.770 **********
2026-03-09 00:20:45.485706 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:20:45.485727 | orchestrator |
2026-03-09 00:20:45.485746 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-03-09 00:20:45.485760 | orchestrator | Monday 09 March 2026 00:20:44 +0000 (0:00:00.104) 0:00:06.874 **********
2026-03-09 00:20:45.485774 | orchestrator | changed: [testbed-manager]
2026-03-09 00:20:45.485786 | orchestrator |
2026-03-09 00:20:45.485799 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:20:45.485814 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-09 00:20:45.485827 | orchestrator |
2026-03-09 00:20:45.485839 | orchestrator |
2026-03-09 00:20:45.485851 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:20:45.485863 | orchestrator | Monday 09 March 2026 00:20:45 +0000 (0:00:00.534) 0:00:07.409 **********
2026-03-09 00:20:45.485876 | orchestrator | ===============================================================================
2026-03-09 00:20:45.485889 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.51s
2026-03-09 00:20:45.485902 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s
2026-03-09 00:20:45.485914 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s
2026-03-09 00:20:45.485927 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.53s
2026-03-09 00:20:45.485939 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s
2026-03-09 00:20:45.777126 | orchestrator | + osism apply known-hosts
2026-03-09 00:20:57.825174 | orchestrator | 2026-03-09 00:20:57 | INFO  | Prepare task for execution of known-hosts.
2026-03-09 00:20:57.893418 | orchestrator | 2026-03-09 00:20:57 | INFO  | Task 1a0cc39b-5796-43b2-a9f8-1b889a3a0c4f (known-hosts) was prepared for execution.
2026-03-09 00:20:57.893529 | orchestrator | 2026-03-09 00:20:57 | INFO  | It takes a moment until task 1a0cc39b-5796-43b2-a9f8-1b889a3a0c4f (known-hosts) has been started and output is visible here.
2026-03-09 00:21:12.884855 | orchestrator |
2026-03-09 00:21:12.884961 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-03-09 00:21:12.884979 | orchestrator |
2026-03-09 00:21:12.884992 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-03-09 00:21:12.885004 | orchestrator | Monday 09 March 2026 00:21:01 +0000 (0:00:00.116) 0:00:00.116 **********
2026-03-09 00:21:12.885015 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-09 00:21:12.885027 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-09 00:21:12.885037 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-09 00:21:12.885072 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-09 00:21:12.885083 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-09 00:21:12.885094 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-09 00:21:12.885105 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-09 00:21:12.885115 | orchestrator |
2026-03-09 00:21:12.885127 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-03-09 00:21:12.885139 | orchestrator | Monday 09 March 2026 00:21:07 +0000 (0:00:05.567) 0:00:05.683 **********
2026-03-09 00:21:12.885161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-09 00:21:12.885175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-09 00:21:12.885186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-09 00:21:12.885197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-09 00:21:12.885208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-09 00:21:12.885218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-09 00:21:12.885229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-09 00:21:12.885240 | orchestrator |
2026-03-09 00:21:12.885251 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:21:12.885270 | orchestrator | Monday 09 March 2026 00:21:08 +0000 (0:00:00.160) 0:00:05.844 **********
2026-03-09 00:21:12.885293 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHvDfp7s/txBKZEdTWUq22I0qTdrQJnznDTlWNKXUKXzzY8RnTTo5uS53+UUSH0sGUEkyNlf5+llpVlDUgv3w8ejSOlUXkCLVEdOkeP72YkiN2ZWj3yXuBT6+KK3+Vuy5pzlRIFPpWPNeC1CrswV733ZK1sSFgtLTNEiP3RSiAiwiHrFkZfgYsHJ7xrfQ4cp0RnlrMINJVlsjtDr4dApTbhiaez+7V/YcC54DPYD6O1A/Ji7UY0jB48JD8XXb1FNvWIPnSxyP4Zh/j810RZVCyyoXVD2GIPlX8PprmMfv3oY9cY6IfEY4iLxlxcPZyl1ZkRS+dXUiEY+E/qgf7uV9W4hfn2qxzwabXYZ/9yqDwxfLcpgX0FVddd0YGO1BD32Q+Lf7yykN50PQbp81Wx+8XYHrPgjWOEUnjUvCKS+jn2jAMDURC5gMqXja7gLf0iOq5ksUS4Yl8Ev+eRkvYg8Wa/MJ4cCrfiLNxNuXfeFz3tp4/yPCkmfP0bGNvgTLYiI0=)
2026-03-09 00:21:12.885316 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPjngoqfNDPHKPv7n93F/LBeQQ9RMvv9f3lHYWGUcQmcpYNUA9FS8nw2BzFy25/KuQTxIWkI2vSoSTDx1kgo/w4=)
2026-03-09 00:21:12.885338 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPNQtdl+zvfYDN4IcsltL4peFCeRy6kBIYKKbsRCVSFD)
2026-03-09 00:21:12.885358 | orchestrator |
2026-03-09 00:21:12.885378 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:21:12.885396 | orchestrator | Monday 09 March 2026 00:21:08 +0000 (0:00:01.107) 0:00:06.952 **********
2026-03-09 00:21:12.885416 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOlUSxFNo99JPOSH7PRn3zgVAT18H5CaRWgROfejyspxfVmVZGPda304xKmT0NxTKexYvTTwa2mQaK8NB1zfyKg=)
2026-03-09 00:21:12.885436 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE8PirrmYtmm3B+Fy34fkP1qW0cx53Mt1NgwyMYqfPSm)
2026-03-09 00:21:12.885508 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx2Mj4UuJRGYLWG6GE/hFkqG/6ER8+DqbpOlvAZcFXNhcnVxyxgT/dbfDh6Y5MvhzkgwjbHvnRPT0DhHDhdSZNNqilTcxAoiw4rENxo89dLg4TYzexo0q8PxLNnrWaK0Nw18YFGKdKNGZ8AOCwhFNvbi1vrVtK9i9z7dJDliUb6qZcBI/DbIKhNiPUReo4Qt7JZe/s4tuMpqTjHDAZIATLfG4+QDdpJY71dC7tk49qIRdxgfmzhTIGL1mfA1U/QDXJYuGbluUMYVbhEtAhfuFLcvQ3feAcihJUm53lj7zstdw4j8a70oyUDcsKyv0Vcpjx7tx3+ccdFmKd8Ye/jAAHcCIezcuNS2mGkAXaXxov9ZQxaNGgroMbh9IzIsKsSPBgiE2U8M74eI+0KYn6pYwcxFlB2bqjBi09v1FX9HCXZIhBO567OwTyL1an2UdFd/Mwjo9SohVmohJTGuL/g3WbVGhU63msHcsl+BGSD1JDDZEzCarve57iD5c6+wMC7xs=)
2026-03-09 00:21:12.885532 | orchestrator |
2026-03-09 00:21:12.885553 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:21:12.885573 | orchestrator | Monday 09 March 2026 00:21:09 +0000 (0:00:01.052) 0:00:08.004 **********
2026-03-09 00:21:12.885596 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDugySskmfA4jdRE9wZNgzwJmoiakYa2AtYLqANqq0712x1ZjQ3EcfRqBVRM9iHlahfkNPyc3KdaH84GAdFWPYC4FxhaVANsF2H1m/CjK6HCbRQdldqzZJmp5Fd1/Ro3aPgRlmFcUUbs4gZoa21PsRiaMJnkvKeau+EXxyVLITMR7JpGQnQFWlHgvxmTwp/DslLcPYedY+96/xP33gFdDzvnmhl8dDOJ8rjJX+XKlliU9WsktEOSbeqJxV2omequURYSuk9FcqjbaoaPkXKPAn8YaE2QP/p8goEi7a0P1HEZkbJw6RPnn3VBirHvL6+27X5ujpm6+CTJEB6YzRlg8aBrzmxXfUkMNP2CohD4N6+E0KcLnXzgJXsYSuAEFmsRQ8Dg9Lu/uOSEoNq8vejxM9tIblQXDZ6xF7QYwdbxcf1ps4HhUopTzi/gKk5YZQFyihNdj8NbGGxgJwlFOkNF7zTItIQw2rHkf97VSlnjzBftsrI34O0XMb/RQ8hs/giczk=)
2026-03-09 00:21:12.885617 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFHQx/nlvRI4+wDr0rJSB6MZyzfu9sy8BJPVtugJu0/Mq2FhWKfOhkQjKW3AqS/NiUIGTNOyxyG7JyxzxcPg+0c=)
2026-03-09 00:21:12.885762 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICQQvqBcIEEwW9o6xXgU/L18GS5Hw3lkS/QBlQxUB9FP)
2026-03-09 00:21:12.885788 | orchestrator |
2026-03-09 00:21:12.885808 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:21:12.885826 | orchestrator | Monday 09 March 2026 00:21:10 +0000 (0:00:01.033) 0:00:09.037 **********
2026-03-09 00:21:12.885850 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXD8SLJDHwgdfGUWuWIT+RgFVbB5qJ89m9drSZf4WCpsMgUIkHgD+t2n4akmHP1SwxcBqmQlmPtnhsLC4qOfZjmsvqoVH+gkGZ0I+ihXWbx4JSHIgu5GfeMneEisVZFiw5Qv1JSx5e+NvXgFaqt2H5JfHNuIqHB+P8j3ZV+K5HrlqSMYx0N8VgvIKzMldo9ecfHejt1+fSWtRqsgsEWmg7DIZm2SZ3pRMQVtMKFyNdfdP6DcdnKX376dyhICEAtC7osUHP+KuYdCvKeXBCVfvW7UdHKEYHeAcvzUTp55bH0sxmX1tkzmMDsMcBu7DFz8PwQ9FKYupIIJVRNtb8cixPdWWPG70vgdTlV7gagVF2JHZe2x/jcjaMJXsXUjUaX4jCjJ99lnzdCvI2EIL/R1RBqPtkHfUPjmWndqBcvUb3SaPonkInubPJgwpkDF9HSBSPhwsYEP/PI7rOvaSD5jCkhxLDCZNKTf4lh74DQ4CsdbuQdEjfGORVLZTPvsoikzs=)
2026-03-09 00:21:12.885863 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDBm3qYNCZhk7wfkqruJ59UEd8AqnkGpTDqrZTabCWN9YoEwRyvo0Lz/zIrFI2rV7OHa3HInpTGY/RrSnjurwqg=)
2026-03-09 00:21:12.885874 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ3dQ/dhch4OVgvQ0nnfGwEOtUhpSKEaAbgk26yGf6wI)
2026-03-09 00:21:12.885885 | orchestrator |
2026-03-09 00:21:12.885896 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:21:12.885906 | orchestrator | Monday 09 March 2026 00:21:11 +0000 (0:00:01.035) 0:00:10.073 **********
2026-03-09 00:21:12.885917 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEUKHUkEsMHqlB/uNsWZ8IxtBnT1a5bdJY2Y7+80KTgv)
2026-03-09 00:21:12.885929 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCV1jBW6S20a/T7cAoiKpPYnKDfpPQArv/Z/1F4cUWoskjKkQe64K9DN8VdahER7QGbbmB1UZixGugb+JA3fclaA6YWG8tJGfML+DwagDamv1RPR4fOcr5wYxJ6hc94+b+Qy09AcCVIYnJs3mHHmdPQQSgkRG0Gnufmy/HpZ/l5XIo4bMVD+Ni6DmdK3Shv2jHF8JaEF+5ygvv/e9Vz409600EuCAOAsRyTWtOeRZaP/Tftjfy3Oe79vIP92/DgXc5tuXLF2qCXSNOOleONCB6XnUQ4Hck6/rvAqgvZMcKNGUfUJ1o3vl+fXpMHzeTk7rOowpUjH2RBnGSTb3S/r3JN7rxl9UTETb7SImLhWLn3Zeh6CfzSkshg0XfsCluAsZyQNozSVcfSJqS2YbtAfyJuR1jEmbgZGbWapcTWnVDZwiLxDw9URd8guAl3IuOvaPNW7W6/Xq6iKY/8WioX2xEqIN5axmBcpsFAlIKNm0G97fFMqWlZCFTZbv5DniAl/h8=)
2026-03-09 00:21:12.885950 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPwIXKgdlI+iBFw55xGBqq0zejzlGl9oEQxFAWL6UD3MQ0C0S0pCOnk+yrY2DqYZj5iVviq5DD73Q06qlHP38io=)
2026-03-09 00:21:12.885961 | orchestrator |
2026-03-09 00:21:12.885972 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:21:12.885983 | orchestrator | Monday 09 March 2026 00:21:12 +0000 (0:00:01.019) 0:00:11.093 **********
2026-03-09 00:21:12.886004 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI94H5VpOx+Da2H49wgVOJPgbpF0N8Pn+dIVr5meu0FC1S9vXs65c8Wkw2uXl7DRB4cq+4xi6dhrw4x1dQ84d5s=)
2026-03-09 00:21:23.749362 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2VBJZR5FIY17/Qlw6V+PIc8AvAhqVJCWHe8KmEDAVS9KXgViC6e0lYvtCbAgDMX78TR4TGPdOrmKyE9VlnFXIMNVW4UNBXq2ES6906ap8iXSQPZDHofCEPFW9z07mdcAZkWNK1FaSh/hBr1Fr/FkMHhEmk7LDRl0pgufkWiB2pH7HC5Vtvh6sqgXW/nJzZbvb482bOfVHNe1iCpDbd1RA4WOif+lKJxGXzgYujT+exY3rcWQ0EAknBJ1wFrahvIvHRkOFmAHaD/8/NjxTcMPaJmIG+DDYL0OAuTMcx8NXXcj2Ub2xnGjugi32DTx/s7vknkBgV+3oJ/E4kyfeUUTVKNuS/OL//bZfDO2d/+sSFVaAsBF85Zbch87aeAwRuTdZVvh1lAx4NDS88dT68HhTJB+Fyek1INBJnaospWe5x5o6pIjA3hGbHYN3YEtJ99tgcJ/bgmdGBaZlijOi2bK60mnXj2pSdmyVr7f+/r4tyTJgBBgGnklVvwTrYfEyYyU=)
2026-03-09 00:21:23.749472 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJGctnS9c4NDVXmdgL84aSyMKjz1nfXflBUqfN1PZk3F)
2026-03-09 00:21:23.749498 | orchestrator |
2026-03-09 00:21:23.749516 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:21:23.749536 | orchestrator | Monday 09 March 2026 00:21:13 +0000 (0:00:01.026) 0:00:12.120 **********
2026-03-09 00:21:23.749557 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZuqllRFJPoGsKz9CPk4qxyDq+funXyJ+CHJ2hb15Qz7vElhSAXiNcB9PWu/MUPohcs1na8nhrX/9xb2rLcOJoT6EeD1EZvJhIv7VBkLUWE8ykYs6kD2wqMJzVhjArdbP8Sp4EuNrGBo1G1wAURAeA5L3aa1ptcCWlLlUvlm1dBq+N5ZrzPc+VR0UVrmj3ka9/lPbXS36cFHrX2k/OuXUvyJCLoFfnSFfz7Eidunb6pghZ/6VgoeQ4R02nIlxBiRyKC+RJdcfUr6psKyd2dwnzn6Z4QdG8br+aXw8ECyYKePh3DNncl5wq1Qtt1DRDryWhLU6riD277jlkaIM2tcGQvfBHcdCGwCWE+W/ikiDwPKD6EengcvmnW4DTMIfc7FBOjIk7P9XDT5ft+tY6LHiWio9zI9UJd7Ux5Yh7pSqFKXRE/5QGfPRqFau+7lC6Pfff0mpcGnfsD4c8T6a7kx+JiygihkBus4J8NVJrJaVy0zBtbfmEcHfzdxf1Q87d97M=)
2026-03-09 00:21:23.749578 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPFg2kDCYAkSRh5LAqUqG66hYEwotQRkaUu7Iqz6YPbzQ4kA6/GcIAeXQ9wYz+Gj2WDS5HaRLqD7lrIG0MyIq4A=)
2026-03-09 00:21:23.749598 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEUC/f+M/b6WK36u+Bk5+9nqrp/n4G6imOAS5QZKctxD)
2026-03-09 00:21:23.749617 | orchestrator |
2026-03-09 00:21:23.749636 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-03-09 00:21:23.749683 | orchestrator | Monday 09 March 2026 00:21:14 +0000 (0:00:01.038) 0:00:13.159 **********
2026-03-09 00:21:23.749704 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-09 00:21:23.749724 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-09 00:21:23.749736 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-09 00:21:23.749747 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-09 00:21:23.749758 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-09 00:21:23.749786 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-09 00:21:23.749798 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-09 00:21:23.749809 | orchestrator |
2026-03-09 00:21:23.749877 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-03-09 00:21:23.749889 | orchestrator | Monday 09 March 2026 00:21:19 +0000 (0:00:05.204) 0:00:18.363 **********
2026-03-09 00:21:23.749901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-09 00:21:23.749913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-09 00:21:23.749926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-09 00:21:23.749939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-09 00:21:23.749952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-09 00:21:23.749965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-09 00:21:23.749978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-09 00:21:23.749990 | orchestrator |
2026-03-09 00:21:23.750065 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:21:23.750081 | orchestrator | Monday 09 March 2026 00:21:19 +0000 (0:00:00.171) 0:00:18.534 **********
2026-03-09 00:21:23.750094 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPjngoqfNDPHKPv7n93F/LBeQQ9RMvv9f3lHYWGUcQmcpYNUA9FS8nw2BzFy25/KuQTxIWkI2vSoSTDx1kgo/w4=)
2026-03-09 00:21:23.750109 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHvDfp7s/txBKZEdTWUq22I0qTdrQJnznDTlWNKXUKXzzY8RnTTo5uS53+UUSH0sGUEkyNlf5+llpVlDUgv3w8ejSOlUXkCLVEdOkeP72YkiN2ZWj3yXuBT6+KK3+Vuy5pzlRIFPpWPNeC1CrswV733ZK1sSFgtLTNEiP3RSiAiwiHrFkZfgYsHJ7xrfQ4cp0RnlrMINJVlsjtDr4dApTbhiaez+7V/YcC54DPYD6O1A/Ji7UY0jB48JD8XXb1FNvWIPnSxyP4Zh/j810RZVCyyoXVD2GIPlX8PprmMfv3oY9cY6IfEY4iLxlxcPZyl1ZkRS+dXUiEY+E/qgf7uV9W4hfn2qxzwabXYZ/9yqDwxfLcpgX0FVddd0YGO1BD32Q+Lf7yykN50PQbp81Wx+8XYHrPgjWOEUnjUvCKS+jn2jAMDURC5gMqXja7gLf0iOq5ksUS4Yl8Ev+eRkvYg8Wa/MJ4cCrfiLNxNuXfeFz3tp4/yPCkmfP0bGNvgTLYiI0=)
2026-03-09 00:21:23.750123 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPNQtdl+zvfYDN4IcsltL4peFCeRy6kBIYKKbsRCVSFD)
2026-03-09 00:21:23.750135 | orchestrator |
2026-03-09 00:21:23.750148 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-09 00:21:23.750161 | orchestrator | Monday 09 March 2026
00:21:20 +0000 (0:00:00.984) 0:00:19.519 ********** 2026-03-09 00:21:23.750174 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOlUSxFNo99JPOSH7PRn3zgVAT18H5CaRWgROfejyspxfVmVZGPda304xKmT0NxTKexYvTTwa2mQaK8NB1zfyKg=) 2026-03-09 00:21:23.750187 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx2Mj4UuJRGYLWG6GE/hFkqG/6ER8+DqbpOlvAZcFXNhcnVxyxgT/dbfDh6Y5MvhzkgwjbHvnRPT0DhHDhdSZNNqilTcxAoiw4rENxo89dLg4TYzexo0q8PxLNnrWaK0Nw18YFGKdKNGZ8AOCwhFNvbi1vrVtK9i9z7dJDliUb6qZcBI/DbIKhNiPUReo4Qt7JZe/s4tuMpqTjHDAZIATLfG4+QDdpJY71dC7tk49qIRdxgfmzhTIGL1mfA1U/QDXJYuGbluUMYVbhEtAhfuFLcvQ3feAcihJUm53lj7zstdw4j8a70oyUDcsKyv0Vcpjx7tx3+ccdFmKd8Ye/jAAHcCIezcuNS2mGkAXaXxov9ZQxaNGgroMbh9IzIsKsSPBgiE2U8M74eI+0KYn6pYwcxFlB2bqjBi09v1FX9HCXZIhBO567OwTyL1an2UdFd/Mwjo9SohVmohJTGuL/g3WbVGhU63msHcsl+BGSD1JDDZEzCarve57iD5c6+wMC7xs=) 2026-03-09 00:21:23.750210 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE8PirrmYtmm3B+Fy34fkP1qW0cx53Mt1NgwyMYqfPSm) 2026-03-09 00:21:23.750222 | orchestrator | 2026-03-09 00:21:23.750235 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:21:23.750248 | orchestrator | Monday 09 March 2026 00:21:21 +0000 (0:00:01.028) 0:00:20.548 ********** 2026-03-09 00:21:23.750262 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFHQx/nlvRI4+wDr0rJSB6MZyzfu9sy8BJPVtugJu0/Mq2FhWKfOhkQjKW3AqS/NiUIGTNOyxyG7JyxzxcPg+0c=) 2026-03-09 00:21:23.750276 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDugySskmfA4jdRE9wZNgzwJmoiakYa2AtYLqANqq0712x1ZjQ3EcfRqBVRM9iHlahfkNPyc3KdaH84GAdFWPYC4FxhaVANsF2H1m/CjK6HCbRQdldqzZJmp5Fd1/Ro3aPgRlmFcUUbs4gZoa21PsRiaMJnkvKeau+EXxyVLITMR7JpGQnQFWlHgvxmTwp/DslLcPYedY+96/xP33gFdDzvnmhl8dDOJ8rjJX+XKlliU9WsktEOSbeqJxV2omequURYSuk9FcqjbaoaPkXKPAn8YaE2QP/p8goEi7a0P1HEZkbJw6RPnn3VBirHvL6+27X5ujpm6+CTJEB6YzRlg8aBrzmxXfUkMNP2CohD4N6+E0KcLnXzgJXsYSuAEFmsRQ8Dg9Lu/uOSEoNq8vejxM9tIblQXDZ6xF7QYwdbxcf1ps4HhUopTzi/gKk5YZQFyihNdj8NbGGxgJwlFOkNF7zTItIQw2rHkf97VSlnjzBftsrI34O0XMb/RQ8hs/giczk=) 2026-03-09 00:21:23.750287 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICQQvqBcIEEwW9o6xXgU/L18GS5Hw3lkS/QBlQxUB9FP) 2026-03-09 00:21:23.750298 | orchestrator | 2026-03-09 00:21:23.750309 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:21:23.750320 | orchestrator | Monday 09 March 2026 00:21:23 +0000 (0:00:01.093) 0:00:21.641 ********** 2026-03-09 00:21:23.750331 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ3dQ/dhch4OVgvQ0nnfGwEOtUhpSKEaAbgk26yGf6wI) 2026-03-09 00:21:23.750363 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXD8SLJDHwgdfGUWuWIT+RgFVbB5qJ89m9drSZf4WCpsMgUIkHgD+t2n4akmHP1SwxcBqmQlmPtnhsLC4qOfZjmsvqoVH+gkGZ0I+ihXWbx4JSHIgu5GfeMneEisVZFiw5Qv1JSx5e+NvXgFaqt2H5JfHNuIqHB+P8j3ZV+K5HrlqSMYx0N8VgvIKzMldo9ecfHejt1+fSWtRqsgsEWmg7DIZm2SZ3pRMQVtMKFyNdfdP6DcdnKX376dyhICEAtC7osUHP+KuYdCvKeXBCVfvW7UdHKEYHeAcvzUTp55bH0sxmX1tkzmMDsMcBu7DFz8PwQ9FKYupIIJVRNtb8cixPdWWPG70vgdTlV7gagVF2JHZe2x/jcjaMJXsXUjUaX4jCjJ99lnzdCvI2EIL/R1RBqPtkHfUPjmWndqBcvUb3SaPonkInubPJgwpkDF9HSBSPhwsYEP/PI7rOvaSD5jCkhxLDCZNKTf4lh74DQ4CsdbuQdEjfGORVLZTPvsoikzs=) 2026-03-09 00:21:28.600236 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDBm3qYNCZhk7wfkqruJ59UEd8AqnkGpTDqrZTabCWN9YoEwRyvo0Lz/zIrFI2rV7OHa3HInpTGY/RrSnjurwqg=) 2026-03-09 00:21:28.600360 | orchestrator | 2026-03-09 00:21:28.600384 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:21:28.600402 | orchestrator | Monday 09 March 2026 00:21:24 +0000 (0:00:01.045) 0:00:22.687 ********** 2026-03-09 00:21:28.600419 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPwIXKgdlI+iBFw55xGBqq0zejzlGl9oEQxFAWL6UD3MQ0C0S0pCOnk+yrY2DqYZj5iVviq5DD73Q06qlHP38io=) 2026-03-09 00:21:28.600442 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCV1jBW6S20a/T7cAoiKpPYnKDfpPQArv/Z/1F4cUWoskjKkQe64K9DN8VdahER7QGbbmB1UZixGugb+JA3fclaA6YWG8tJGfML+DwagDamv1RPR4fOcr5wYxJ6hc94+b+Qy09AcCVIYnJs3mHHmdPQQSgkRG0Gnufmy/HpZ/l5XIo4bMVD+Ni6DmdK3Shv2jHF8JaEF+5ygvv/e9Vz409600EuCAOAsRyTWtOeRZaP/Tftjfy3Oe79vIP92/DgXc5tuXLF2qCXSNOOleONCB6XnUQ4Hck6/rvAqgvZMcKNGUfUJ1o3vl+fXpMHzeTk7rOowpUjH2RBnGSTb3S/r3JN7rxl9UTETb7SImLhWLn3Zeh6CfzSkshg0XfsCluAsZyQNozSVcfSJqS2YbtAfyJuR1jEmbgZGbWapcTWnVDZwiLxDw9URd8guAl3IuOvaPNW7W6/Xq6iKY/8WioX2xEqIN5axmBcpsFAlIKNm0G97fFMqWlZCFTZbv5DniAl/h8=) 2026-03-09 00:21:28.600499 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEUKHUkEsMHqlB/uNsWZ8IxtBnT1a5bdJY2Y7+80KTgv) 2026-03-09 00:21:28.600513 | orchestrator | 2026-03-09 00:21:28.600524 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:21:28.600535 | orchestrator | Monday 09 March 2026 00:21:25 +0000 (0:00:01.057) 0:00:23.744 ********** 2026-03-09 00:21:28.600562 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI94H5VpOx+Da2H49wgVOJPgbpF0N8Pn+dIVr5meu0FC1S9vXs65c8Wkw2uXl7DRB4cq+4xi6dhrw4x1dQ84d5s=) 2026-03-09 00:21:28.600575 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2VBJZR5FIY17/Qlw6V+PIc8AvAhqVJCWHe8KmEDAVS9KXgViC6e0lYvtCbAgDMX78TR4TGPdOrmKyE9VlnFXIMNVW4UNBXq2ES6906ap8iXSQPZDHofCEPFW9z07mdcAZkWNK1FaSh/hBr1Fr/FkMHhEmk7LDRl0pgufkWiB2pH7HC5Vtvh6sqgXW/nJzZbvb482bOfVHNe1iCpDbd1RA4WOif+lKJxGXzgYujT+exY3rcWQ0EAknBJ1wFrahvIvHRkOFmAHaD/8/NjxTcMPaJmIG+DDYL0OAuTMcx8NXXcj2Ub2xnGjugi32DTx/s7vknkBgV+3oJ/E4kyfeUUTVKNuS/OL//bZfDO2d/+sSFVaAsBF85Zbch87aeAwRuTdZVvh1lAx4NDS88dT68HhTJB+Fyek1INBJnaospWe5x5o6pIjA3hGbHYN3YEtJ99tgcJ/bgmdGBaZlijOi2bK60mnXj2pSdmyVr7f+/r4tyTJgBBgGnklVvwTrYfEyYyU=) 2026-03-09 00:21:28.600586 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJGctnS9c4NDVXmdgL84aSyMKjz1nfXflBUqfN1PZk3F) 2026-03-09 00:21:28.600597 | orchestrator | 2026-03-09 00:21:28.600608 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-09 00:21:28.600619 | orchestrator | Monday 09 March 2026 00:21:26 +0000 (0:00:01.066) 0:00:24.811 ********** 2026-03-09 00:21:28.600630 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZuqllRFJPoGsKz9CPk4qxyDq+funXyJ+CHJ2hb15Qz7vElhSAXiNcB9PWu/MUPohcs1na8nhrX/9xb2rLcOJoT6EeD1EZvJhIv7VBkLUWE8ykYs6kD2wqMJzVhjArdbP8Sp4EuNrGBo1G1wAURAeA5L3aa1ptcCWlLlUvlm1dBq+N5ZrzPc+VR0UVrmj3ka9/lPbXS36cFHrX2k/OuXUvyJCLoFfnSFfz7Eidunb6pghZ/6VgoeQ4R02nIlxBiRyKC+RJdcfUr6psKyd2dwnzn6Z4QdG8br+aXw8ECyYKePh3DNncl5wq1Qtt1DRDryWhLU6riD277jlkaIM2tcGQvfBHcdCGwCWE+W/ikiDwPKD6EengcvmnW4DTMIfc7FBOjIk7P9XDT5ft+tY6LHiWio9zI9UJd7Ux5Yh7pSqFKXRE/5QGfPRqFau+7lC6Pfff0mpcGnfsD4c8T6a7kx+JiygihkBus4J8NVJrJaVy0zBtbfmEcHfzdxf1Q87d97M=) 2026-03-09 00:21:28.600642 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPFg2kDCYAkSRh5LAqUqG66hYEwotQRkaUu7Iqz6YPbzQ4kA6/GcIAeXQ9wYz+Gj2WDS5HaRLqD7lrIG0MyIq4A=) 2026-03-09 00:21:28.600709 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEUC/f+M/b6WK36u+Bk5+9nqrp/n4G6imOAS5QZKctxD) 2026-03-09 00:21:28.600723 | orchestrator | 2026-03-09 00:21:28.600734 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-09 00:21:28.600745 | orchestrator | Monday 09 March 2026 00:21:27 +0000 (0:00:01.048) 0:00:25.860 ********** 2026-03-09 00:21:28.600756 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-09 00:21:28.600770 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-09 00:21:28.600783 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-09 00:21:28.600796 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-09 00:21:28.600829 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-09 00:21:28.600842 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-09 00:21:28.600855 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-09 00:21:28.600868 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:21:28.600882 | orchestrator | 2026-03-09 00:21:28.600895 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-09 00:21:28.600908 | orchestrator | Monday 09 March 2026 00:21:27 +0000 (0:00:00.168) 0:00:26.028 ********** 2026-03-09 00:21:28.600921 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:21:28.600940 | orchestrator | 2026-03-09 00:21:28.600951 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-09 00:21:28.600962 | orchestrator | Monday 09 March 2026 00:21:27 +0000 
(0:00:00.082) 0:00:26.110 ********** 2026-03-09 00:21:28.600973 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:21:28.600984 | orchestrator | 2026-03-09 00:21:28.600995 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-09 00:21:28.601005 | orchestrator | Monday 09 March 2026 00:21:27 +0000 (0:00:00.051) 0:00:26.161 ********** 2026-03-09 00:21:28.601016 | orchestrator | changed: [testbed-manager] 2026-03-09 00:21:28.601026 | orchestrator | 2026-03-09 00:21:28.601037 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:21:28.601048 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-09 00:21:28.601060 | orchestrator | 2026-03-09 00:21:28.601071 | orchestrator | 2026-03-09 00:21:28.601082 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:21:28.601093 | orchestrator | Monday 09 March 2026 00:21:28 +0000 (0:00:00.786) 0:00:26.948 ********** 2026-03-09 00:21:28.601103 | orchestrator | =============================================================================== 2026-03-09 00:21:28.601114 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.57s 2026-03-09 00:21:28.601125 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.20s 2026-03-09 00:21:28.601136 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-03-09 00:21:28.601147 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-09 00:21:28.601158 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-09 00:21:28.601169 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-09 
00:21:28.601179 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-09 00:21:28.601190 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-09 00:21:28.601201 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-09 00:21:28.601212 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-09 00:21:28.601223 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-09 00:21:28.601233 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-09 00:21:28.601244 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-09 00:21:28.601262 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-09 00:21:28.601273 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-09 00:21:28.601284 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-09 00:21:28.601295 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.79s 2026-03-09 00:21:28.601306 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-09 00:21:28.601316 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-03-09 00:21:28.601327 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-09 00:21:28.886296 | orchestrator | + osism apply squid 2026-03-09 00:21:40.872850 | orchestrator | 2026-03-09 00:21:40 | INFO  | Prepare task for execution of squid. 
2026-03-09 00:21:40.960892 | orchestrator | 2026-03-09 00:21:40 | INFO  | Task 2396d360-d500-414f-8409-2d8a1a76fc96 (squid) was prepared for execution. 2026-03-09 00:21:40.960987 | orchestrator | 2026-03-09 00:21:40 | INFO  | It takes a moment until task 2396d360-d500-414f-8409-2d8a1a76fc96 (squid) has been started and output is visible here. 2026-03-09 00:23:37.553112 | orchestrator | 2026-03-09 00:23:37.553218 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-09 00:23:37.553235 | orchestrator | 2026-03-09 00:23:37.553247 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-09 00:23:37.553259 | orchestrator | Monday 09 March 2026 00:21:45 +0000 (0:00:00.173) 0:00:00.173 ********** 2026-03-09 00:23:37.553269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-09 00:23:37.553280 | orchestrator | 2026-03-09 00:23:37.553290 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-09 00:23:37.553300 | orchestrator | Monday 09 March 2026 00:21:45 +0000 (0:00:00.081) 0:00:00.255 ********** 2026-03-09 00:23:37.553310 | orchestrator | ok: [testbed-manager] 2026-03-09 00:23:37.553321 | orchestrator | 2026-03-09 00:23:37.553331 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-09 00:23:37.553341 | orchestrator | Monday 09 March 2026 00:21:46 +0000 (0:00:01.374) 0:00:01.630 ********** 2026-03-09 00:23:37.553351 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-09 00:23:37.553361 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-09 00:23:37.553370 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-09 00:23:37.553398 | orchestrator | 2026-03-09 00:23:37.553419 
| orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-09 00:23:37.553430 | orchestrator | Monday 09 March 2026 00:21:47 +0000 (0:00:01.120) 0:00:02.750 ********** 2026-03-09 00:23:37.553440 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-09 00:23:37.553450 | orchestrator | 2026-03-09 00:23:37.553460 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-09 00:23:37.553470 | orchestrator | Monday 09 March 2026 00:21:48 +0000 (0:00:01.066) 0:00:03.817 ********** 2026-03-09 00:23:37.553480 | orchestrator | ok: [testbed-manager] 2026-03-09 00:23:37.553490 | orchestrator | 2026-03-09 00:23:37.553500 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-09 00:23:37.553509 | orchestrator | Monday 09 March 2026 00:21:49 +0000 (0:00:00.381) 0:00:04.199 ********** 2026-03-09 00:23:37.553519 | orchestrator | changed: [testbed-manager] 2026-03-09 00:23:37.553529 | orchestrator | 2026-03-09 00:23:37.553539 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-09 00:23:37.553549 | orchestrator | Monday 09 March 2026 00:21:49 +0000 (0:00:00.884) 0:00:05.083 ********** 2026-03-09 00:23:37.553559 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-09 00:23:37.553570 | orchestrator | ok: [testbed-manager] 2026-03-09 00:23:37.553580 | orchestrator | 2026-03-09 00:23:37.553590 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-09 00:23:37.553600 | orchestrator | Monday 09 March 2026 00:22:24 +0000 (0:00:34.590) 0:00:39.674 ********** 2026-03-09 00:23:37.553610 | orchestrator | changed: [testbed-manager] 2026-03-09 00:23:37.553619 | orchestrator | 2026-03-09 00:23:37.553629 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-09 00:23:37.553639 | orchestrator | Monday 09 March 2026 00:22:36 +0000 (0:00:12.047) 0:00:51.721 ********** 2026-03-09 00:23:37.553649 | orchestrator | Pausing for 60 seconds 2026-03-09 00:23:37.553660 | orchestrator | changed: [testbed-manager] 2026-03-09 00:23:37.553672 | orchestrator | 2026-03-09 00:23:37.553684 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-09 00:23:37.553696 | orchestrator | Monday 09 March 2026 00:23:36 +0000 (0:01:00.081) 0:01:51.803 ********** 2026-03-09 00:23:37.553726 | orchestrator | ok: [testbed-manager] 2026-03-09 00:23:37.553738 | orchestrator | 2026-03-09 00:23:37.553749 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-09 00:23:37.553761 | orchestrator | Monday 09 March 2026 00:23:36 +0000 (0:00:00.054) 0:01:51.857 ********** 2026-03-09 00:23:37.553798 | orchestrator | changed: [testbed-manager] 2026-03-09 00:23:37.553810 | orchestrator | 2026-03-09 00:23:37.553821 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:23:37.553833 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:23:37.553844 | orchestrator | 2026-03-09 00:23:37.553856 | orchestrator | 2026-03-09 00:23:37.553868 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-09 00:23:37.553879 | orchestrator | Monday 09 March 2026 00:23:37 +0000 (0:00:00.570) 0:01:52.428 ********** 2026-03-09 00:23:37.553889 | orchestrator | =============================================================================== 2026-03-09 00:23:37.553899 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-03-09 00:23:37.553909 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.59s 2026-03-09 00:23:37.553919 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.05s 2026-03-09 00:23:37.553929 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.37s 2026-03-09 00:23:37.553938 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.12s 2026-03-09 00:23:37.553948 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s 2026-03-09 00:23:37.553958 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s 2026-03-09 00:23:37.553968 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.57s 2026-03-09 00:23:37.553977 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-03-09 00:23:37.553987 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-03-09 00:23:37.553997 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.05s 2026-03-09 00:23:37.820012 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-09 00:23:37.820107 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-09 00:23:37.823528 | orchestrator | + set -e 2026-03-09 00:23:37.823676 | orchestrator | + NAMESPACE=kolla 2026-03-09 
00:23:37.823695 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-09 00:23:37.829055 | orchestrator | ++ semver latest 9.0.0 2026-03-09 00:23:37.879444 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-09 00:23:37.879541 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-09 00:23:37.880200 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-09 00:23:49.911545 | orchestrator | 2026-03-09 00:23:49 | INFO  | Prepare task for execution of operator. 2026-03-09 00:23:49.978181 | orchestrator | 2026-03-09 00:23:49 | INFO  | Task 58579311-c853-4459-bcf9-b4b27b19f7f2 (operator) was prepared for execution. 2026-03-09 00:23:49.978272 | orchestrator | 2026-03-09 00:23:49 | INFO  | It takes a moment until task 58579311-c853-4459-bcf9-b4b27b19f7f2 (operator) has been started and output is visible here. 2026-03-09 00:24:06.414546 | orchestrator | 2026-03-09 00:24:06.414650 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-09 00:24:06.414665 | orchestrator | 2026-03-09 00:24:06.414675 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-09 00:24:06.414685 | orchestrator | Monday 09 March 2026 00:23:53 +0000 (0:00:00.103) 0:00:00.103 ********** 2026-03-09 00:24:06.414695 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:24:06.414755 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:24:06.414767 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:24:06.414777 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:24:06.414787 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:24:06.414796 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:24:06.414809 | orchestrator | 2026-03-09 00:24:06.414819 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-09 00:24:06.414829 | orchestrator | Monday 09 March 2026 00:23:58 
+0000 (0:00:04.274) 0:00:04.377 ********** 2026-03-09 00:24:06.414859 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:24:06.414869 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:24:06.414878 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:24:06.414887 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:24:06.414897 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:24:06.414906 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:24:06.414915 | orchestrator | 2026-03-09 00:24:06.414925 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-09 00:24:06.414934 | orchestrator | 2026-03-09 00:24:06.414943 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-09 00:24:06.414953 | orchestrator | Monday 09 March 2026 00:23:58 +0000 (0:00:00.782) 0:00:05.160 ********** 2026-03-09 00:24:06.414962 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:24:06.414972 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:24:06.414981 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:24:06.414990 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:24:06.414999 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:24:06.415008 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:24:06.415018 | orchestrator | 2026-03-09 00:24:06.415027 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-09 00:24:06.415037 | orchestrator | Monday 09 March 2026 00:23:58 +0000 (0:00:00.158) 0:00:05.318 ********** 2026-03-09 00:24:06.415046 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:24:06.415057 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:24:06.415069 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:24:06.415096 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:24:06.415108 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:24:06.415120 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:24:06.415132 | orchestrator | 
2026-03-09 00:24:06.415148 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-09 00:24:06.415159 | orchestrator | Monday 09 March 2026 00:23:59 +0000 (0:00:00.153) 0:00:05.472 ********** 2026-03-09 00:24:06.415171 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:24:06.415183 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:24:06.415194 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:24:06.415205 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:24:06.415217 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:24:06.415227 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:24:06.415238 | orchestrator | 2026-03-09 00:24:06.415250 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-09 00:24:06.415261 | orchestrator | Monday 09 March 2026 00:23:59 +0000 (0:00:00.676) 0:00:06.149 ********** 2026-03-09 00:24:06.415272 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:24:06.415283 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:24:06.415294 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:24:06.415305 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:24:06.415316 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:24:06.415327 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:24:06.415339 | orchestrator | 2026-03-09 00:24:06.415350 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-09 00:24:06.415362 | orchestrator | Monday 09 March 2026 00:24:00 +0000 (0:00:00.806) 0:00:06.955 ********** 2026-03-09 00:24:06.415374 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-09 00:24:06.415385 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-09 00:24:06.415397 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-09 00:24:06.415409 | orchestrator | changed: [testbed-node-3] => (item=adm) 
2026-03-09 00:24:06.415418 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-09 00:24:06.415427 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-09 00:24:06.415450 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-09 00:24:06.415460 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-09 00:24:06.415478 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-09 00:24:06.415488 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-09 00:24:06.415504 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-09 00:24:06.415513 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-09 00:24:06.415523 | orchestrator | 2026-03-09 00:24:06.415532 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-09 00:24:06.415542 | orchestrator | Monday 09 March 2026 00:24:01 +0000 (0:00:01.276) 0:00:08.231 ********** 2026-03-09 00:24:06.415551 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:24:06.415561 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:24:06.415570 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:24:06.415579 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:24:06.415589 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:24:06.415598 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:24:06.415607 | orchestrator | 2026-03-09 00:24:06.415617 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-09 00:24:06.415627 | orchestrator | Monday 09 March 2026 00:24:03 +0000 (0:00:01.181) 0:00:09.413 ********** 2026-03-09 00:24:06.415636 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-09 00:24:06.415646 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-09 00:24:06.415656 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 
2026-03-09 00:24:06.415665 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-09 00:24:06.415675 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-09 00:24:06.415700 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-09 00:24:06.415726 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-09 00:24:06.415736 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-09 00:24:06.415745 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-09 00:24:06.415755 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-09 00:24:06.415764 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-09 00:24:06.415773 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-09 00:24:06.415783 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-09 00:24:06.415792 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-09 00:24:06.415801 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-09 00:24:06.415811 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-09 00:24:06.415820 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-09 00:24:06.415829 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-09 00:24:06.415839 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-09 00:24:06.415848 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-09 00:24:06.415857 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-09 00:24:06.415867 | orchestrator |
2026-03-09 00:24:06.415876 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-09 00:24:06.415887 | orchestrator | Monday 09 March 2026 00:24:04 +0000 (0:00:01.252) 0:00:10.665 **********
2026-03-09 00:24:06.415896 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:24:06.415905 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:24:06.415915 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:24:06.415924 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:24:06.415933 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:24:06.415943 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:24:06.415952 | orchestrator |
2026-03-09 00:24:06.415962 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-09 00:24:06.415971 | orchestrator | Monday 09 March 2026 00:24:04 +0000 (0:00:00.142) 0:00:10.808 **********
2026-03-09 00:24:06.415987 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:24:06.415996 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:24:06.416006 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:24:06.416015 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:24:06.416025 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:24:06.416034 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:24:06.416043 | orchestrator |
2026-03-09 00:24:06.416053 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-09 00:24:06.416062 | orchestrator | Monday 09 March 2026 00:24:04 +0000 (0:00:00.182) 0:00:10.990 **********
2026-03-09 00:24:06.416072 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:24:06.416081 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:24:06.416090 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:24:06.416100 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:24:06.416109 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:24:06.416118 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:24:06.416128 | orchestrator |
2026-03-09 00:24:06.416137 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-09 00:24:06.416147 | orchestrator | Monday 09 March 2026 00:24:05 +0000 (0:00:00.561) 0:00:11.552 **********
2026-03-09 00:24:06.416156 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:24:06.416165 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:24:06.416174 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:24:06.416184 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:24:06.416193 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:24:06.416202 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:24:06.416212 | orchestrator |
2026-03-09 00:24:06.416221 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-09 00:24:06.416231 | orchestrator | Monday 09 March 2026 00:24:05 +0000 (0:00:00.150) 0:00:11.702 **********
2026-03-09 00:24:06.416240 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-09 00:24:06.416250 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-09 00:24:06.416259 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:24:06.416268 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-09 00:24:06.416277 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:24:06.416287 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:24:06.416296 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-09 00:24:06.416305 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:24:06.416315 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-09 00:24:06.416324 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:24:06.416333 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-09 00:24:06.416343 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:24:06.416352 | orchestrator |
2026-03-09 00:24:06.416362 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-09 00:24:06.416371 | orchestrator | Monday 09 March 2026 00:24:06 +0000 (0:00:00.788) 0:00:12.491 **********
2026-03-09 00:24:06.416380 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:24:06.416390 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:24:06.416399 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:24:06.416408 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:24:06.416418 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:24:06.416427 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:24:06.416436 | orchestrator |
2026-03-09 00:24:06.416446 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-09 00:24:06.416455 | orchestrator | Monday 09 March 2026 00:24:06 +0000 (0:00:00.157) 0:00:12.648 **********
2026-03-09 00:24:06.416465 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:24:06.416474 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:24:06.416483 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:24:06.416493 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:24:06.416508 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:24:07.664506 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:24:07.664614 | orchestrator |
2026-03-09 00:24:07.664635 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-09 00:24:07.664650 | orchestrator | Monday 09 March 2026 00:24:06 +0000 (0:00:00.128) 0:00:12.776 **********
2026-03-09 00:24:07.664664 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:24:07.664677 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:24:07.664689 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:24:07.664703 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:24:07.664770 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:24:07.664783 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:24:07.664796 | orchestrator |
2026-03-09 00:24:07.664808 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-09 00:24:07.664821 | orchestrator | Monday 09 March 2026 00:24:06 +0000 (0:00:00.152) 0:00:12.929 **********
2026-03-09 00:24:07.664834 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:24:07.664847 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:24:07.664860 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:24:07.664873 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:24:07.664887 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:24:07.664900 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:24:07.664914 | orchestrator |
2026-03-09 00:24:07.664927 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-09 00:24:07.664940 | orchestrator | Monday 09 March 2026 00:24:07 +0000 (0:00:00.655) 0:00:13.585 **********
2026-03-09 00:24:07.664952 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:24:07.664964 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:24:07.664977 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:24:07.664989 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:24:07.665002 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:24:07.665015 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:24:07.665028 | orchestrator |
2026-03-09 00:24:07.665041 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:24:07.665057 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 00:24:07.665101 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 00:24:07.665116 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 00:24:07.665130 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 00:24:07.665144 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 00:24:07.665158 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 00:24:07.665172 | orchestrator |
2026-03-09 00:24:07.665184 | orchestrator |
2026-03-09 00:24:07.665198 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:24:07.665213 | orchestrator | Monday 09 March 2026 00:24:07 +0000 (0:00:00.218) 0:00:13.803 **********
2026-03-09 00:24:07.665226 | orchestrator | ===============================================================================
2026-03-09 00:24:07.665239 | orchestrator | Gathering Facts --------------------------------------------------------- 4.27s
2026-03-09 00:24:07.665252 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.28s
2026-03-09 00:24:07.665265 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.25s
2026-03-09 00:24:07.665279 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s
2026-03-09 00:24:07.665319 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2026-03-09 00:24:07.665331 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.79s
2026-03-09 00:24:07.665343 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s
2026-03-09 00:24:07.665356 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.68s
2026-03-09 00:24:07.665370 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2026-03-09 00:24:07.665383 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2026-03-09 00:24:07.665395 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s
2026-03-09 00:24:07.665407 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s
2026-03-09 00:24:07.665420 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2026-03-09 00:24:07.665432 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2026-03-09 00:24:07.665445 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2026-03-09 00:24:07.665457 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2026-03-09 00:24:07.665469 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2026-03-09 00:24:07.665481 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2026-03-09 00:24:07.665494 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
2026-03-09 00:24:07.942868 | orchestrator | + osism apply --environment custom facts
2026-03-09 00:24:09.775547 | orchestrator | 2026-03-09 00:24:09 | INFO  | Trying to run play facts in environment custom
2026-03-09 00:24:19.782466 | orchestrator | 2026-03-09 00:24:19 | INFO  | Prepare task for execution of facts.
2026-03-09 00:24:19.854642 | orchestrator | 2026-03-09 00:24:19 | INFO  | Task e3c7900a-8917-4e82-bb77-c82d6eab8533 (facts) was prepared for execution.
2026-03-09 00:24:19.854814 | orchestrator | 2026-03-09 00:24:19 | INFO  | It takes a moment until task e3c7900a-8917-4e82-bb77-c82d6eab8533 (facts) has been started and output is visible here.
2026-03-09 00:25:04.873367 | orchestrator |
2026-03-09 00:25:04.873497 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-09 00:25:04.873525 | orchestrator |
2026-03-09 00:25:04.873583 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-09 00:25:04.873605 | orchestrator | Monday 09 March 2026 00:24:23 +0000 (0:00:00.069) 0:00:00.069 **********
2026-03-09 00:25:04.873625 | orchestrator | ok: [testbed-manager]
2026-03-09 00:25:04.873645 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:25:04.873664 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:25:04.873683 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:25:04.873695 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:25:04.873706 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:25:04.873716 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:25:04.873771 | orchestrator |
2026-03-09 00:25:04.873786 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-09 00:25:04.873797 | orchestrator | Monday 09 March 2026 00:24:25 +0000 (0:00:01.342) 0:00:01.412 **********
2026-03-09 00:25:04.873808 | orchestrator | ok: [testbed-manager]
2026-03-09 00:25:04.873819 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:25:04.873830 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:25:04.873841 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:25:04.873852 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:25:04.873864 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:25:04.873874 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:25:04.873885 | orchestrator |
2026-03-09 00:25:04.873896 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-09 00:25:04.873938 | orchestrator |
2026-03-09 00:25:04.873966 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-09 00:25:04.873979 | orchestrator | Monday 09 March 2026 00:24:26 +0000 (0:00:01.206) 0:00:02.618 **********
2026-03-09 00:25:04.873993 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:25:04.874005 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:25:04.874071 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:25:04.874086 | orchestrator |
2026-03-09 00:25:04.874099 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-09 00:25:04.874112 | orchestrator | Monday 09 March 2026 00:24:26 +0000 (0:00:00.086) 0:00:02.704 **********
2026-03-09 00:25:04.874124 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:25:04.874136 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:25:04.874148 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:25:04.874161 | orchestrator |
2026-03-09 00:25:04.874174 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-09 00:25:04.874186 | orchestrator | Monday 09 March 2026 00:24:26 +0000 (0:00:00.192) 0:00:02.893 **********
2026-03-09 00:25:04.874199 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:25:04.874211 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:25:04.874224 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:25:04.874237 | orchestrator |
2026-03-09 00:25:04.874249 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-09 00:25:04.874262 | orchestrator | Monday 09 March 2026 00:24:26 +0000 (0:00:00.133) 0:00:03.086 **********
2026-03-09 00:25:04.874273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:25:04.874286 | orchestrator |
2026-03-09 00:25:04.874296 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-09 00:25:04.874307 | orchestrator | Monday 09 March 2026 00:24:26 +0000 (0:00:00.445) 0:00:03.220 **********
2026-03-09 00:25:04.874318 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:25:04.874328 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:25:04.874339 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:25:04.874349 | orchestrator |
2026-03-09 00:25:04.874360 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-09 00:25:04.874371 | orchestrator | Monday 09 March 2026 00:24:27 +0000 (0:00:00.122) 0:00:03.665 **********
2026-03-09 00:25:04.874381 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:25:04.874392 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:25:04.874403 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:25:04.874413 | orchestrator |
2026-03-09 00:25:04.874424 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-09 00:25:04.874435 | orchestrator | Monday 09 March 2026 00:24:27 +0000 (0:00:00.122) 0:00:03.788 **********
2026-03-09 00:25:04.874445 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:25:04.874456 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:25:04.874466 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:25:04.874477 | orchestrator |
2026-03-09 00:25:04.874487 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-09 00:25:04.874498 | orchestrator | Monday 09 March 2026 00:24:28 +0000 (0:00:01.032) 0:00:04.820 **********
2026-03-09 00:25:04.874509 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:25:04.874519 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:25:04.874530 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:25:04.874541 | orchestrator |
2026-03-09 00:25:04.874551 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-09 00:25:04.874562 | orchestrator | Monday 09 March 2026 00:24:28 +0000 (0:00:00.440) 0:00:05.261 **********
2026-03-09 00:25:04.874573 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:25:04.874583 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:25:04.874594 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:25:04.874604 | orchestrator |
2026-03-09 00:25:04.874615 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-09 00:25:04.874635 | orchestrator | Monday 09 March 2026 00:24:29 +0000 (0:00:01.027) 0:00:06.288 **********
2026-03-09 00:25:04.874645 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:25:04.874656 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:25:04.874666 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:25:04.874677 | orchestrator |
2026-03-09 00:25:04.874688 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-09 00:25:04.874698 | orchestrator | Monday 09 March 2026 00:24:46 +0000 (0:00:16.821) 0:00:23.109 **********
2026-03-09 00:25:04.874709 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:25:04.874719 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:25:04.874755 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:25:04.874768 | orchestrator |
2026-03-09 00:25:04.874787 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-09 00:25:04.874838 | orchestrator | Monday 09 March 2026 00:24:46 +0000 (0:00:00.104) 0:00:23.213 **********
2026-03-09 00:25:04.874862 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:25:04.874880 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:25:04.874898 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:25:04.874915 | orchestrator |
2026-03-09 00:25:04.874932 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-09 00:25:04.874949 | orchestrator | Monday 09 March 2026 00:24:55 +0000 (0:00:08.341) 0:00:31.555 **********
2026-03-09 00:25:04.874967 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:25:04.874984 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:25:04.875001 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:25:04.875019 | orchestrator |
2026-03-09 00:25:04.875037 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-09 00:25:04.875055 | orchestrator | Monday 09 March 2026 00:24:55 +0000 (0:00:00.481) 0:00:32.037 **********
2026-03-09 00:25:04.875073 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-09 00:25:04.875090 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-09 00:25:04.875107 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-09 00:25:04.875124 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-09 00:25:04.875142 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-09 00:25:04.875161 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-09 00:25:04.875180 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-09 00:25:04.875198 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-09 00:25:04.875217 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-09 00:25:04.875228 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-09 00:25:04.875239 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-09 00:25:04.875249 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-09 00:25:04.875260 | orchestrator |
2026-03-09 00:25:04.875271 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-09 00:25:04.875282 | orchestrator | Monday 09 March 2026 00:24:59 +0000 (0:00:03.784) 0:00:35.822 **********
2026-03-09 00:25:04.875292 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:25:04.875303 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:25:04.875313 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:25:04.875324 | orchestrator |
2026-03-09 00:25:04.875335 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-09 00:25:04.875345 | orchestrator |
2026-03-09 00:25:04.875356 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-09 00:25:04.875367 | orchestrator | Monday 09 March 2026 00:25:00 +0000 (0:00:01.442) 0:00:37.264 **********
2026-03-09 00:25:04.875377 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:25:04.875388 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:25:04.875409 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:25:04.875420 | orchestrator | ok: [testbed-manager]
2026-03-09 00:25:04.875475 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:25:04.875487 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:25:04.875517 | orchestrator | ok: [testbed-node-3]
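The "Create custom facts directory" and "Copy fact files" tasks above stage local facts that Ansible exposes under `ansible_local` after the next fact gathering. A minimal sketch of the mechanism, under assumptions: the real directory is normally `/etc/ansible/facts.d` (a temp dir stands in here), and the JSON payload is illustrative, not the actual content of `testbed_ceph_devices`:

```shell
# Sketch: stage a custom fact file the way the play does (assumed payload).
# Ansible reads *.fact files (static JSON, or executables printing JSON)
# from its fact_path and exposes them as ansible_local.<basename>.
factdir=$(mktemp -d)  # stand-in for /etc/ansible/facts.d
printf '{"devices": ["/dev/sdb", "/dev/sdc"]}' \
  > "$factdir/testbed_ceph_devices.fact"
cat "$factdir/testbed_ceph_devices.fact"
```

Once the "Gathers facts about hosts" task reruns setup, such a value would be reachable in later plays as `ansible_local.testbed_ceph_devices.devices`.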
2026-03-09 00:25:04.875529 | orchestrator |
2026-03-09 00:25:04.875551 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:25:04.875563 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:25:04.875574 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:25:04.875586 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:25:04.875597 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:25:04.875609 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:25:04.875620 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:25:04.875637 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-09 00:25:04.875662 | orchestrator |
2026-03-09 00:25:04.875686 | orchestrator |
2026-03-09 00:25:04.875705 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:25:04.875724 | orchestrator | Monday 09 March 2026 00:25:04 +0000 (0:00:03.899) 0:00:41.163 **********
2026-03-09 00:25:04.875767 | orchestrator | ===============================================================================
2026-03-09 00:25:04.875786 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.82s
2026-03-09 00:25:04.875804 | orchestrator | Install required packages (Debian) -------------------------------------- 8.34s
2026-03-09 00:25:04.875823 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.90s
2026-03-09 00:25:04.875843 | orchestrator | Copy fact files --------------------------------------------------------- 3.78s
2026-03-09 00:25:04.875860 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.44s
2026-03-09 00:25:04.875877 | orchestrator | Create custom facts directory ------------------------------------------- 1.34s
2026-03-09 00:25:04.875900 | orchestrator | Copy fact file ---------------------------------------------------------- 1.21s
2026-03-09 00:25:05.061488 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2026-03-09 00:25:05.061574 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s
2026-03-09 00:25:05.061585 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-03-09 00:25:05.061595 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2026-03-09 00:25:05.061604 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-03-09 00:25:05.061612 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2026-03-09 00:25:05.061621 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2026-03-09 00:25:05.061630 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2026-03-09 00:25:05.061640 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2026-03-09 00:25:05.061648 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-03-09 00:25:05.061657 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2026-03-09 00:25:05.339849 | orchestrator | + osism apply bootstrap
2026-03-09 00:25:17.349613 | orchestrator | 2026-03-09 00:25:17 | INFO  | Prepare task for execution of bootstrap.
2026-03-09 00:25:17.427380 | orchestrator | 2026-03-09 00:25:17 | INFO  | Task 426b8bc9-2335-4a31-8484-7966c328bf57 (bootstrap) was prepared for execution.
2026-03-09 00:25:17.427549 | orchestrator | 2026-03-09 00:25:17 | INFO  | It takes a moment until task 426b8bc9-2335-4a31-8484-7966c328bf57 (bootstrap) has been started and output is visible here.
2026-03-09 00:25:33.144122 | orchestrator |
2026-03-09 00:25:33.144235 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-09 00:25:33.144254 | orchestrator |
2026-03-09 00:25:33.144267 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-09 00:25:33.144278 | orchestrator | Monday 09 March 2026 00:25:21 +0000 (0:00:00.102) 0:00:00.102 **********
2026-03-09 00:25:33.144290 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:25:33.144302 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:25:33.144313 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:25:33.144324 | orchestrator | ok: [testbed-manager]
2026-03-09 00:25:33.144334 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:25:33.144345 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:25:33.144355 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:25:33.144366 | orchestrator |
2026-03-09 00:25:33.144377 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-09 00:25:33.144388 | orchestrator |
2026-03-09 00:25:33.144399 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-09 00:25:33.144410 | orchestrator | Monday 09 March 2026 00:25:21 +0000 (0:00:00.195) 0:00:00.297 **********
2026-03-09 00:25:33.144420 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:25:33.144432 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:25:33.144443 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:25:33.144454 | orchestrator | ok: [testbed-manager]
2026-03-09 00:25:33.144464 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:25:33.144475 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:25:33.144485 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:25:33.144496 | orchestrator |
2026-03-09 00:25:33.144507 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-09 00:25:33.144518 | orchestrator |
2026-03-09 00:25:33.144529 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-09 00:25:33.144539 | orchestrator | Monday 09 March 2026 00:25:25 +0000 (0:00:03.696) 0:00:03.993 **********
2026-03-09 00:25:33.144551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:25:33.144562 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-09 00:25:33.144573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 00:25:33.144583 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-09 00:25:33.144594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 00:25:33.144605 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-09 00:25:33.144615 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-09 00:25:33.144626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-09 00:25:33.144637 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-09 00:25:33.144648 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-09 00:25:33.144661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-09 00:25:33.144673 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-09 00:25:33.144685 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-09 00:25:33.144698 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-09 00:25:33.144710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-09 00:25:33.144723 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-09 00:25:33.144759 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-09 00:25:33.144802 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-09 00:25:33.144817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-09 00:25:33.144828 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-09 00:25:33.144838 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:25:33.144849 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-09 00:25:33.144860 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-09 00:25:33.144871 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:25:33.144882 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-03-09 00:25:33.144892 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-09 00:25:33.144903 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-09 00:25:33.144913 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-09 00:25:33.144924 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-09 00:25:33.144934 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-09 00:25:33.144957 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-09 00:25:33.144969 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-09 00:25:33.144990 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-09 00:25:33.145002 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-03-09 00:25:33.145012 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  
2026-03-09 00:25:33.145023 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-09 00:25:33.145034 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-03-09 00:25:33.145044 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-09 00:25:33.145055 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-09 00:25:33.145065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-09 00:25:33.145076 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-09 00:25:33.145087 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-09 00:25:33.145098 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-09 00:25:33.145108 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:25:33.145119 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-09 00:25:33.145130 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-09 00:25:33.145158 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-09 00:25:33.145170 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-09 00:25:33.145181 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:25:33.145191 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-09 00:25:33.145202 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:25:33.145212 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-09 00:25:33.145223 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-09 00:25:33.145234 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:25:33.145244 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-09 00:25:33.145255 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:25:33.145266 | orchestrator | 2026-03-09 00:25:33.145277 | orchestrator 
| PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-09 00:25:33.145287 | orchestrator | 2026-03-09 00:25:33.145298 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-09 00:25:33.145309 | orchestrator | Monday 09 March 2026 00:25:25 +0000 (0:00:00.416) 0:00:04.410 ********** 2026-03-09 00:25:33.145320 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:33.145330 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:33.145341 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:33.145360 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:33.145371 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:33.145381 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:33.145392 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:33.145403 | orchestrator | 2026-03-09 00:25:33.145413 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-09 00:25:33.145424 | orchestrator | Monday 09 March 2026 00:25:27 +0000 (0:00:01.265) 0:00:05.676 ********** 2026-03-09 00:25:33.145435 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:33.145445 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:33.145456 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:33.145466 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:33.145477 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:33.145488 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:33.145498 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:33.145509 | orchestrator | 2026-03-09 00:25:33.145519 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-09 00:25:33.145530 | orchestrator | Monday 09 March 2026 00:25:28 +0000 (0:00:01.334) 0:00:07.010 ********** 2026-03-09 00:25:33.145542 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:25:33.145555 | orchestrator | 2026-03-09 00:25:33.145566 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-09 00:25:33.145577 | orchestrator | Monday 09 March 2026 00:25:28 +0000 (0:00:00.258) 0:00:07.268 ********** 2026-03-09 00:25:33.145587 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:25:33.145598 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:25:33.145609 | orchestrator | changed: [testbed-manager] 2026-03-09 00:25:33.145620 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:25:33.145631 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:25:33.145641 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:25:33.145652 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:25:33.145662 | orchestrator | 2026-03-09 00:25:33.145673 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-09 00:25:33.145684 | orchestrator | Monday 09 March 2026 00:25:30 +0000 (0:00:02.055) 0:00:09.323 ********** 2026-03-09 00:25:33.145695 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:25:33.145707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:25:33.145719 | orchestrator | 2026-03-09 00:25:33.145729 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-09 00:25:33.145759 | orchestrator | Monday 09 March 2026 00:25:30 +0000 (0:00:00.257) 0:00:09.581 ********** 2026-03-09 00:25:33.145770 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:25:33.145781 | 
orchestrator | changed: [testbed-node-5] 2026-03-09 00:25:33.145791 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:25:33.145802 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:25:33.145830 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:25:33.145841 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:25:33.145852 | orchestrator | 2026-03-09 00:25:33.145863 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-09 00:25:33.145874 | orchestrator | Monday 09 March 2026 00:25:31 +0000 (0:00:01.045) 0:00:10.627 ********** 2026-03-09 00:25:33.145885 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:25:33.145895 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:25:33.145906 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:25:33.145917 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:25:33.145927 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:25:33.145938 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:25:33.145948 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:25:33.145959 | orchestrator | 2026-03-09 00:25:33.145976 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-09 00:25:33.145987 | orchestrator | Monday 09 March 2026 00:25:32 +0000 (0:00:00.568) 0:00:11.196 ********** 2026-03-09 00:25:33.145998 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:25:33.146008 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:25:33.146080 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:25:33.146098 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:25:33.146109 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:25:33.146119 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:25:33.146130 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:33.146140 | orchestrator | 2026-03-09 00:25:33.146151 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-09 00:25:33.146163 | orchestrator | Monday 09 March 2026 00:25:33 +0000 (0:00:00.465) 0:00:11.661 ********** 2026-03-09 00:25:33.146174 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:25:33.146184 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:25:33.146203 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:25:44.335821 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:25:44.335955 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:25:44.335983 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:25:44.336005 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:25:44.336023 | orchestrator | 2026-03-09 00:25:44.336036 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-09 00:25:44.336049 | orchestrator | Monday 09 March 2026 00:25:33 +0000 (0:00:00.196) 0:00:11.857 ********** 2026-03-09 00:25:44.336062 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:25:44.336091 | orchestrator | 2026-03-09 00:25:44.336103 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-09 00:25:44.336115 | orchestrator | Monday 09 March 2026 00:25:33 +0000 (0:00:00.282) 0:00:12.140 ********** 2026-03-09 00:25:44.336126 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:25:44.336137 | orchestrator | 2026-03-09 00:25:44.336148 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-09 
00:25:44.336164 | orchestrator | Monday 09 March 2026 00:25:33 +0000 (0:00:00.386) 0:00:12.526 ********** 2026-03-09 00:25:44.336183 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:44.336202 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:44.336220 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:44.336265 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:44.336287 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:44.336306 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:44.336326 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:44.336345 | orchestrator | 2026-03-09 00:25:44.336365 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-09 00:25:44.336379 | orchestrator | Monday 09 March 2026 00:25:35 +0000 (0:00:01.375) 0:00:13.902 ********** 2026-03-09 00:25:44.336393 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:25:44.336406 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:25:44.336419 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:25:44.336432 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:25:44.336445 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:25:44.336457 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:25:44.336470 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:25:44.336484 | orchestrator | 2026-03-09 00:25:44.336496 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-09 00:25:44.336509 | orchestrator | Monday 09 March 2026 00:25:35 +0000 (0:00:00.205) 0:00:14.108 ********** 2026-03-09 00:25:44.336548 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:44.336561 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:44.336573 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:44.336585 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:44.336598 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:44.336610 | orchestrator 
| ok: [testbed-node-0] 2026-03-09 00:25:44.336622 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:44.336633 | orchestrator | 2026-03-09 00:25:44.336643 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-09 00:25:44.336654 | orchestrator | Monday 09 March 2026 00:25:36 +0000 (0:00:00.561) 0:00:14.669 ********** 2026-03-09 00:25:44.336665 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:25:44.336675 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:25:44.336686 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:25:44.336696 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:25:44.336707 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:25:44.336718 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:25:44.336759 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:25:44.336793 | orchestrator | 2026-03-09 00:25:44.336811 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-09 00:25:44.336831 | orchestrator | Monday 09 March 2026 00:25:36 +0000 (0:00:00.214) 0:00:14.884 ********** 2026-03-09 00:25:44.336849 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:25:44.336866 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:25:44.336883 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:44.336900 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:25:44.336918 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:25:44.336934 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:25:44.336952 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:25:44.336968 | orchestrator | 2026-03-09 00:25:44.336986 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-09 00:25:44.337005 | orchestrator | Monday 09 March 2026 00:25:36 +0000 (0:00:00.532) 0:00:15.417 ********** 2026-03-09 00:25:44.337022 | orchestrator | ok: 
[testbed-manager] 2026-03-09 00:25:44.337040 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:25:44.337059 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:25:44.337078 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:25:44.337096 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:25:44.337114 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:25:44.337131 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:25:44.337150 | orchestrator | 2026-03-09 00:25:44.337169 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-09 00:25:44.337189 | orchestrator | Monday 09 March 2026 00:25:37 +0000 (0:00:01.180) 0:00:16.597 ********** 2026-03-09 00:25:44.337208 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:44.337242 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:44.337255 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:44.337265 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:44.337276 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:44.337287 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:44.337298 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:44.337308 | orchestrator | 2026-03-09 00:25:44.337320 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-09 00:25:44.337331 | orchestrator | Monday 09 March 2026 00:25:38 +0000 (0:00:00.977) 0:00:17.575 ********** 2026-03-09 00:25:44.337366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:25:44.337380 | orchestrator | 2026-03-09 00:25:44.337391 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-09 00:25:44.337402 | orchestrator | Monday 09 March 2026 
00:25:39 +0000 (0:00:00.241) 0:00:17.816 ********** 2026-03-09 00:25:44.337427 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:25:44.337439 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:25:44.337450 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:25:44.337460 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:25:44.337471 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:25:44.337482 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:25:44.337493 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:25:44.337504 | orchestrator | 2026-03-09 00:25:44.337515 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-09 00:25:44.337526 | orchestrator | Monday 09 March 2026 00:25:40 +0000 (0:00:01.183) 0:00:18.999 ********** 2026-03-09 00:25:44.337537 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:44.337548 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:44.337558 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:44.337569 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:44.337580 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:44.337591 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:44.337601 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:44.337612 | orchestrator | 2026-03-09 00:25:44.337624 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-09 00:25:44.337635 | orchestrator | Monday 09 March 2026 00:25:40 +0000 (0:00:00.180) 0:00:19.180 ********** 2026-03-09 00:25:44.337646 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:44.337657 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:44.337667 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:44.337678 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:44.337689 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:44.337700 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:44.337710 | 
orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:44.337721 | orchestrator | 2026-03-09 00:25:44.337732 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-09 00:25:44.337773 | orchestrator | Monday 09 March 2026 00:25:40 +0000 (0:00:00.173) 0:00:19.353 ********** 2026-03-09 00:25:44.337785 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:44.337796 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:44.337806 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:44.337817 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:44.337847 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:44.337858 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:44.337868 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:44.337879 | orchestrator | 2026-03-09 00:25:44.337890 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-09 00:25:44.337901 | orchestrator | Monday 09 March 2026 00:25:40 +0000 (0:00:00.185) 0:00:19.538 ********** 2026-03-09 00:25:44.337913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:25:44.337926 | orchestrator | 2026-03-09 00:25:44.337936 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-09 00:25:44.337947 | orchestrator | Monday 09 March 2026 00:25:41 +0000 (0:00:00.225) 0:00:19.764 ********** 2026-03-09 00:25:44.337958 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:44.337969 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:44.337980 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:44.337991 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:44.338001 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:44.338012 | orchestrator | ok: 
[testbed-node-4] 2026-03-09 00:25:44.338096 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:44.338108 | orchestrator | 2026-03-09 00:25:44.338119 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-09 00:25:44.338130 | orchestrator | Monday 09 March 2026 00:25:41 +0000 (0:00:00.551) 0:00:20.315 ********** 2026-03-09 00:25:44.338157 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:25:44.338169 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:25:44.338180 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:25:44.338199 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:25:44.338210 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:25:44.338221 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:25:44.338232 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:25:44.338243 | orchestrator | 2026-03-09 00:25:44.338254 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-09 00:25:44.338265 | orchestrator | Monday 09 March 2026 00:25:41 +0000 (0:00:00.166) 0:00:20.481 ********** 2026-03-09 00:25:44.338276 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:44.338287 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:44.338298 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:44.338309 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:44.338320 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:25:44.338331 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:25:44.338342 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:25:44.338352 | orchestrator | 2026-03-09 00:25:44.338363 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-09 00:25:44.338375 | orchestrator | Monday 09 March 2026 00:25:42 +0000 (0:00:00.999) 0:00:21.481 ********** 2026-03-09 00:25:44.338386 | orchestrator | ok: [testbed-manager] 2026-03-09 
00:25:44.338396 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:44.338407 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:44.338418 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:44.338429 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:25:44.338440 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:25:44.338451 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:25:44.338462 | orchestrator | 2026-03-09 00:25:44.338473 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-09 00:25:44.338484 | orchestrator | Monday 09 March 2026 00:25:43 +0000 (0:00:00.515) 0:00:21.996 ********** 2026-03-09 00:25:44.338496 | orchestrator | ok: [testbed-manager] 2026-03-09 00:25:44.338507 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:25:44.338517 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:25:44.338528 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:25:44.338550 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:26:24.265791 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:26:24.265895 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:26:24.265910 | orchestrator | 2026-03-09 00:26:24.265923 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-09 00:26:24.265936 | orchestrator | Monday 09 March 2026 00:25:44 +0000 (0:00:01.112) 0:00:23.108 ********** 2026-03-09 00:26:24.265947 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:24.265959 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:24.265970 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:24.265981 | orchestrator | changed: [testbed-manager] 2026-03-09 00:26:24.265992 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:26:24.266003 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:26:24.266014 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:26:24.266086 | orchestrator | 2026-03-09 00:26:24.266098 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-09 00:26:24.266109 | orchestrator | Monday 09 March 2026 00:26:00 +0000 (0:00:16.464) 0:00:39.573 ********** 2026-03-09 00:26:24.266121 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:24.266132 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:24.266143 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:24.266153 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:24.266164 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:24.266175 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:24.266185 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:26:24.266196 | orchestrator | 2026-03-09 00:26:24.266207 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-09 00:26:24.266218 | orchestrator | Monday 09 March 2026 00:26:01 +0000 (0:00:00.162) 0:00:39.735 ********** 2026-03-09 00:26:24.266229 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:24.266239 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:24.266275 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:24.266287 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:24.266300 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:24.266312 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:24.266325 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:26:24.266337 | orchestrator | 2026-03-09 00:26:24.266350 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-09 00:26:24.266363 | orchestrator | Monday 09 March 2026 00:26:01 +0000 (0:00:00.184) 0:00:39.919 ********** 2026-03-09 00:26:24.266375 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:24.266388 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:24.266400 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:24.266413 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:24.266424 | orchestrator | ok: 
[testbed-node-0] 2026-03-09 00:26:24.266436 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:24.266448 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:26:24.266461 | orchestrator | 2026-03-09 00:26:24.266474 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-03-09 00:26:24.266487 | orchestrator | Monday 09 March 2026 00:26:01 +0000 (0:00:00.176) 0:00:40.096 ********** 2026-03-09 00:26:24.266501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:26:24.266516 | orchestrator | 2026-03-09 00:26:24.266529 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-09 00:26:24.266541 | orchestrator | Monday 09 March 2026 00:26:01 +0000 (0:00:00.243) 0:00:40.339 ********** 2026-03-09 00:26:24.266554 | orchestrator | ok: [testbed-manager] 2026-03-09 00:26:24.266566 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:26:24.266596 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:26:24.266609 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:26:24.266621 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:26:24.266634 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:26:24.266647 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:26:24.266658 | orchestrator | 2026-03-09 00:26:24.266668 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-09 00:26:24.266679 | orchestrator | Monday 09 March 2026 00:26:03 +0000 (0:00:01.868) 0:00:42.208 ********** 2026-03-09 00:26:24.266690 | orchestrator | changed: [testbed-manager] 2026-03-09 00:26:24.266701 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:26:24.266712 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:26:24.266726 | orchestrator | 
changed: [testbed-node-5]
2026-03-09 00:26:24.266793 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:26:24.266806 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:26:24.266817 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:26:24.266828 | orchestrator |
2026-03-09 00:26:24.266839 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-09 00:26:24.266850 | orchestrator | Monday 09 March 2026 00:26:04 +0000 (0:00:01.090) 0:00:43.299 **********
2026-03-09 00:26:24.266861 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:24.266871 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:24.266882 | orchestrator | ok: [testbed-manager]
2026-03-09 00:26:24.266892 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:24.266903 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:26:24.266913 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:26:24.266924 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:26:24.266934 | orchestrator |
2026-03-09 00:26:24.266945 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-09 00:26:24.266956 | orchestrator | Monday 09 March 2026 00:26:05 +0000 (0:00:00.273) 0:00:44.120 **********
2026-03-09 00:26:24.266967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:26:24.266989 | orchestrator |
2026-03-09 00:26:24.267006 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-09 00:26:24.267017 | orchestrator | Monday 09 March 2026 00:26:05 +0000 (0:00:00.273) 0:00:44.393 **********
2026-03-09 00:26:24.267028 | orchestrator | changed: [testbed-manager]
2026-03-09 00:26:24.267039 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:26:24.267050 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:26:24.267061 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:26:24.267072 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:26:24.267082 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:26:24.267093 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:26:24.267104 | orchestrator |
2026-03-09 00:26:24.267134 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-09 00:26:24.267145 | orchestrator | Monday 09 March 2026 00:26:06 +0000 (0:00:01.024) 0:00:45.417 **********
2026-03-09 00:26:24.267156 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:26:24.267167 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:26:24.267178 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:26:24.267188 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:26:24.267199 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:26:24.267209 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:26:24.267220 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:26:24.267231 | orchestrator |
2026-03-09 00:26:24.267242 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-09 00:26:24.267253 | orchestrator | Monday 09 March 2026 00:26:07 +0000 (0:00:00.220) 0:00:45.638 **********
2026-03-09 00:26:24.267264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:26:24.267275 | orchestrator |
2026-03-09 00:26:24.267286 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-09 00:26:24.267296 | orchestrator | Monday 09 March 2026 00:26:07 +0000 (0:00:00.312) 0:00:45.950 **********
2026-03-09 00:26:24.267307 | orchestrator | ok: [testbed-manager]
2026-03-09 00:26:24.267318 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:24.267328 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:24.267339 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:26:24.267349 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:26:24.267360 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:24.267371 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:26:24.267381 | orchestrator |
2026-03-09 00:26:24.267392 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-09 00:26:24.267403 | orchestrator | Monday 09 March 2026 00:26:09 +0000 (0:00:01.837) 0:00:47.788 **********
2026-03-09 00:26:24.267413 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:26:24.267424 | orchestrator | changed: [testbed-manager]
2026-03-09 00:26:24.267435 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:26:24.267446 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:26:24.267456 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:26:24.267467 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:26:24.267477 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:26:24.267488 | orchestrator |
2026-03-09 00:26:24.267499 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-09 00:26:24.267509 | orchestrator | Monday 09 March 2026 00:26:10 +0000 (0:00:01.152) 0:00:48.941 **********
2026-03-09 00:26:24.267520 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:26:24.267530 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:26:24.267541 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:26:24.267551 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:26:24.267562 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:26:24.267573 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:26:24.267583 | orchestrator | changed: [testbed-manager]
2026-03-09 00:26:24.267594 | orchestrator |
2026-03-09 00:26:24.267612 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-09 00:26:24.267623 | orchestrator | Monday 09 March 2026 00:26:21 +0000 (0:00:11.278) 0:01:00.219 **********
2026-03-09 00:26:24.267634 | orchestrator | ok: [testbed-manager]
2026-03-09 00:26:24.267644 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:24.267655 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:26:24.267665 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:24.267676 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:24.267687 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:26:24.267697 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:26:24.267708 | orchestrator |
2026-03-09 00:26:24.267719 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-09 00:26:24.267729 | orchestrator | Monday 09 March 2026 00:26:22 +0000 (0:00:01.065) 0:01:01.285 **********
2026-03-09 00:26:24.267763 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:24.267775 | orchestrator | ok: [testbed-manager]
2026-03-09 00:26:24.267785 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:24.267796 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:24.267807 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:26:24.267817 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:26:24.267828 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:26:24.267838 | orchestrator |
2026-03-09 00:26:24.267849 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-09 00:26:24.267860 | orchestrator | Monday 09 March 2026 00:26:23 +0000 (0:00:00.870) 0:01:02.156 **********
2026-03-09 00:26:24.267870 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:24.267881 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:24.267892 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:24.267902 | orchestrator | ok: [testbed-manager]
2026-03-09 00:26:24.267913 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:26:24.267923 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:26:24.267933 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:26:24.267944 | orchestrator |
2026-03-09 00:26:24.267955 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-09 00:26:24.267966 | orchestrator | Monday 09 March 2026 00:26:23 +0000 (0:00:00.215) 0:01:02.371 **********
2026-03-09 00:26:24.267976 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:26:24.267987 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:26:24.267997 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:26:24.268008 | orchestrator | ok: [testbed-manager]
2026-03-09 00:26:24.268019 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:26:24.268029 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:26:24.268040 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:26:24.268050 | orchestrator |
2026-03-09 00:26:24.268066 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-09 00:26:24.268077 | orchestrator | Monday 09 March 2026 00:26:23 +0000 (0:00:00.218) 0:01:02.590 **********
2026-03-09 00:26:24.268088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:26:24.268099 | orchestrator |
2026-03-09 00:26:24.268117 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-09 00:28:53.640412 | orchestrator | Monday 09 March 2026 00:26:24 +0000 (0:00:00.303) 0:01:02.893 **********
2026-03-09 00:28:53.640547 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:53.640573 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:53.640594 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:53.640614 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:53.640633 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:53.640652 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:53.640671 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:53.640691 | orchestrator |
2026-03-09 00:28:53.640704 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-03-09 00:28:53.640716 | orchestrator | Monday 09 March 2026 00:26:26 +0000 (0:00:01.841) 0:01:04.734 **********
2026-03-09 00:28:53.640753 | orchestrator | changed: [testbed-manager]
2026-03-09 00:28:53.640766 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:28:53.640777 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:28:53.640788 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:28:53.640798 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:28:53.640809 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:28:53.640820 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:28:53.640830 | orchestrator |
2026-03-09 00:28:53.640842 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-03-09 00:28:53.640853 | orchestrator | Monday 09 March 2026 00:26:26 +0000 (0:00:00.632) 0:01:05.367 **********
2026-03-09 00:28:53.640917 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:53.640933 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:53.640946 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:53.640958 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:53.640970 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:53.640982 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:53.640994 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:53.641006 | orchestrator |
2026-03-09 00:28:53.641018 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-03-09 00:28:53.641031 | orchestrator | Monday 09 March 2026 00:26:26 +0000 (0:00:00.212) 0:01:05.579 **********
2026-03-09 00:28:53.641043 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:53.641056 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:53.641074 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:53.641093 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:53.641111 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:53.641129 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:53.641146 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:53.641164 | orchestrator |
2026-03-09 00:28:53.641184 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-03-09 00:28:53.641201 | orchestrator | Monday 09 March 2026 00:26:28 +0000 (0:00:01.224) 0:01:06.804 **********
2026-03-09 00:28:53.641220 | orchestrator | changed: [testbed-manager]
2026-03-09 00:28:53.641239 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:28:53.641258 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:28:53.641278 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:28:53.641297 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:28:53.641316 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:28:53.641352 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:28:53.641365 | orchestrator |
2026-03-09 00:28:53.641376 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-03-09 00:28:53.641387 | orchestrator | Monday 09 March 2026 00:26:30 +0000 (0:00:01.965) 0:01:08.770 **********
2026-03-09 00:28:53.641398 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:53.641409 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:53.641419 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:53.641430 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:53.641441 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:53.641452 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:53.641463 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:53.641473 | orchestrator |
2026-03-09 00:28:53.641484 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-03-09 00:28:53.641495 | orchestrator | Monday 09 March 2026 00:26:32 +0000 (0:00:02.626) 0:01:11.396 **********
2026-03-09 00:28:53.641506 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:53.641517 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:53.641527 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:53.641538 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:53.641548 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:53.641559 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:53.641570 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:53.641580 | orchestrator |
2026-03-09 00:28:53.641591 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-03-09 00:28:53.641614 | orchestrator | Monday 09 March 2026 00:27:11 +0000 (0:00:38.512) 0:01:49.909 **********
2026-03-09 00:28:53.641625 | orchestrator | changed: [testbed-manager]
2026-03-09 00:28:53.641636 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:28:53.641647 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:28:53.641657 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:28:53.641668 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:28:53.641679 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:28:53.641689 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:28:53.641700 | orchestrator |
2026-03-09 00:28:53.641711 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-03-09 00:28:53.641721 | orchestrator | Monday 09 March 2026 00:28:38 +0000 (0:01:26.939) 0:03:16.848 **********
2026-03-09 00:28:53.641732 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:53.641743 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:53.641753 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:53.641764 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:53.641775 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:53.641785 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:53.641796 | orchestrator | ok: [testbed-manager]
2026-03-09 00:28:53.641806 | orchestrator |
2026-03-09 00:28:53.641817 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-03-09 00:28:53.641829 | orchestrator | Monday 09 March 2026 00:28:40 +0000 (0:00:01.865) 0:03:18.714 **********
2026-03-09 00:28:53.641840 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:28:53.641850 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:28:53.641861 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:28:53.641906 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:28:53.641917 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:28:53.641927 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:28:53.641938 | orchestrator | changed: [testbed-manager]
2026-03-09 00:28:53.641949 | orchestrator |
2026-03-09 00:28:53.641960 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-03-09 00:28:53.641971 | orchestrator | Monday 09 March 2026 00:28:52 +0000 (0:00:12.467) 0:03:31.181 **********
2026-03-09 00:28:53.642078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-03-09 00:28:53.642107 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-03-09 00:28:53.642123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-03-09 00:28:53.642136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-09 00:28:53.642148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-09 00:28:53.642168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-03-09 00:28:53.642183 | orchestrator |
2026-03-09 00:28:53.642194 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-03-09 00:28:53.642205 | orchestrator | Monday 09 March 2026 00:28:52 +0000 (0:00:00.393) 0:03:31.575 **********
2026-03-09 00:28:53.642216 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:28:53.642227 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:28:53.642238 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:28:53.642248 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:28:53.642259 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:28:53.642270 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:28:53.642280 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:28:53.642291 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:28:53.642312 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:28:53.642323 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:28:53.642334 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-09 00:28:53.642345 | orchestrator |
2026-03-09 00:28:53.642356 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-03-09 00:28:53.642366 | orchestrator | Monday 09 March 2026 00:28:53 +0000 (0:00:00.633) 0:03:32.208 **********
2026-03-09 00:28:53.642377 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:28:53.642394 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:28:53.642405 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:28:53.642416 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:28:53.642427 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:28:53.642445 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:29:01.445619 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:29:01.445748 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:29:01.445774 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:29:01.445794 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:29:01.445813 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:29:01.445832 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:29:01.445852 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:29:01.445901 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:29:01.445955 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:29:01.445976 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:29:01.445995 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:29:01.446014 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:29:01.446106 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:29:01.446127 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:29:01.446150 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:29:01.446172 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:29:01.446195 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:29:01.446217 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:29:01.446238 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:29:01.446261 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:29:01.446285 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:29:01.446306 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:29:01.446325 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:29:01.446349 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:29:01.446370 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:29:01.446392 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:29:01.446415 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:29:01.446438 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:29:01.446462 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:29:01.446486 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:29:01.446506 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:29:01.446524 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:29:01.446541 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:29:01.446558 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:29:01.446576 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:29:01.446593 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:29:01.446610 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:29:01.446627 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:29:01.446661 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:29:01.446678 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:29:01.446694 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-09 00:29:01.446713 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:29:01.446739 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:29:01.446780 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-09 00:29:01.446797 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:29:01.446814 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:29:01.446829 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-09 00:29:01.446845 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:29:01.446862 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:29:01.446899 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:29:01.446916 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:29:01.446930 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-09 00:29:01.446946 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:29:01.446961 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:29:01.446975 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-09 00:29:01.446989 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:29:01.447004 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:29:01.447019 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:29:01.447033 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:29:01.447048 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-09 00:29:01.447062 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:29:01.447077 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:29:01.447092 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-09 00:29:01.447107 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:29:01.447122 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:29:01.447138 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-09 00:29:01.447154 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-09 00:29:01.447171 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-09 00:29:01.447189 | orchestrator |
2026-03-09 00:29:01.447207 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-09 00:29:01.447224 | orchestrator | Monday 09 March 2026 00:28:59 +0000 (0:00:05.770) 0:03:37.979 **********
2026-03-09 00:29:01.447241 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:29:01.447257 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:29:01.447274 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:29:01.447290 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:29:01.447305 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:29:01.447335 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:29:01.447353 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-09 00:29:01.447369 | orchestrator |
2026-03-09 00:29:01.447386 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-09 00:29:01.447403 | orchestrator | Monday 09 March 2026 00:29:00 +0000 (0:00:00.681) 0:03:38.661 **********
2026-03-09 00:29:01.447421 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:01.447439 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:29:01.447456 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:01.447484 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:01.447503 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:29:01.447520 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:29:01.447537 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:01.447554 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:29:01.447571 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:01.447589 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:01.447629 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:15.575737 | orchestrator |
2026-03-09 00:29:15.575879 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-09 00:29:15.575962 | orchestrator | Monday 09 March 2026 00:29:01 +0000 (0:00:01.439) 0:03:40.101 **********
2026-03-09 00:29:15.575983 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:15.576004 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:29:15.576024 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:15.576043 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:15.576054 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:29:15.576066 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:29:15.576077 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:15.576088 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:29:15.576099 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:15.576110 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:15.576120 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-09 00:29:15.576131 | orchestrator |
2026-03-09 00:29:15.576142 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-09 00:29:15.576153 | orchestrator | Monday 09 March 2026 00:29:02 +0000 (0:00:00.615) 0:03:40.716 **********
2026-03-09 00:29:15.576164 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:29:15.576175 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:29:15.576186 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:29:15.576197 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:29:15.576207 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:29:15.576218 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:29:15.576256 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:29:15.576270 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:29:15.576282 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:29:15.576295 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:29:15.576307 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-09 00:29:15.576319 | orchestrator |
2026-03-09 00:29:15.576332 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-09 00:29:15.576344 | orchestrator | Monday 09 March 2026 00:29:03 +0000 (0:00:01.541) 0:03:42.258 **********
2026-03-09 00:29:15.576356 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:29:15.576368 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:29:15.576382 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:29:15.576394 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:29:15.576406 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:29:15.576419 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:29:15.576432 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:29:15.576444 | orchestrator |
2026-03-09 00:29:15.576456 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-09 00:29:15.576469 | orchestrator | Monday 09 March 2026 00:29:03 +0000 (0:00:00.322) 0:03:42.580 **********
2026-03-09 00:29:15.576483 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:29:15.576496 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:29:15.576509 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:29:15.576521 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:29:15.576534 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:29:15.576544 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:29:15.576555 | orchestrator | ok: [testbed-manager]
2026-03-09 00:29:15.576566 | orchestrator |
2026-03-09 00:29:15.576577 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-09 00:29:15.576588 | orchestrator | Monday 09 March 2026 00:29:09 +0000 (0:00:06.052) 0:03:48.632 **********
2026-03-09 00:29:15.576598 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-09 00:29:15.576610 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-09 00:29:15.576620 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:29:15.576632 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-09 00:29:15.576642 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:29:15.576653 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-09 00:29:15.576664 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:29:15.576674 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-09 00:29:15.576685 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:29:15.576696 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-09 00:29:15.576707 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:29:15.576717 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:29:15.576728 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-09 00:29:15.576739
| orchestrator | skipping: [testbed-node-2] 2026-03-09 00:29:15.576749 | orchestrator | 2026-03-09 00:29:15.576760 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-09 00:29:15.576771 | orchestrator | Monday 09 March 2026 00:29:10 +0000 (0:00:00.312) 0:03:48.945 ********** 2026-03-09 00:29:15.576782 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-09 00:29:15.576793 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-09 00:29:15.576804 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-09 00:29:15.576861 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-09 00:29:15.576874 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-09 00:29:15.576908 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-09 00:29:15.576920 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-09 00:29:15.576947 | orchestrator | 2026-03-09 00:29:15.576966 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-09 00:29:15.576984 | orchestrator | Monday 09 March 2026 00:29:11 +0000 (0:00:01.054) 0:03:49.999 ********** 2026-03-09 00:29:15.577005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:29:15.577025 | orchestrator | 2026-03-09 00:29:15.577043 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-09 00:29:15.577054 | orchestrator | Monday 09 March 2026 00:29:11 +0000 (0:00:00.461) 0:03:50.460 ********** 2026-03-09 00:29:15.577065 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:29:15.577075 | orchestrator | ok: [testbed-manager] 2026-03-09 00:29:15.577086 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:29:15.577097 | orchestrator | ok: 
[testbed-node-4] 2026-03-09 00:29:15.577107 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:29:15.577118 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:29:15.577128 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:29:15.577139 | orchestrator | 2026-03-09 00:29:15.577150 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-09 00:29:15.577160 | orchestrator | Monday 09 March 2026 00:29:13 +0000 (0:00:01.255) 0:03:51.716 ********** 2026-03-09 00:29:15.577171 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:29:15.577182 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:29:15.577192 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:29:15.577203 | orchestrator | ok: [testbed-manager] 2026-03-09 00:29:15.577213 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:29:15.577223 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:29:15.577234 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:29:15.577244 | orchestrator | 2026-03-09 00:29:15.577255 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-09 00:29:15.577284 | orchestrator | Monday 09 March 2026 00:29:13 +0000 (0:00:00.641) 0:03:52.357 ********** 2026-03-09 00:29:15.577295 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:29:15.577306 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:29:15.577317 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:29:15.577327 | orchestrator | changed: [testbed-manager] 2026-03-09 00:29:15.577338 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:29:15.577348 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:29:15.577359 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:29:15.577370 | orchestrator | 2026-03-09 00:29:15.577380 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-09 00:29:15.577391 | orchestrator | Monday 09 March 2026 00:29:14 +0000 (0:00:00.661) 
0:03:53.018 ********** 2026-03-09 00:29:15.577402 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:29:15.577413 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:29:15.577423 | orchestrator | ok: [testbed-manager] 2026-03-09 00:29:15.577434 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:29:15.577444 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:29:15.577455 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:29:15.577466 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:29:15.577476 | orchestrator | 2026-03-09 00:29:15.577487 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-09 00:29:15.577498 | orchestrator | Monday 09 March 2026 00:29:15 +0000 (0:00:00.621) 0:03:53.640 ********** 2026-03-09 00:29:15.577513 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014707.445836, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:15.577535 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014706.5779588, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:15.577553 | orchestrator | 
changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014706.0052512, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:15.577588 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014701.1347501, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:21.198056 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014701.430182, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:21.198140 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014712.09364, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:21.198150 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773014701.0710661, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:21.198157 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:21.198180 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:21.198197 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:21.198203 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:21.198223 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:21.198229 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:21.198235 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 00:29:21.198241 | orchestrator | 2026-03-09 00:29:21.198248 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-09 00:29:21.198255 | orchestrator | Monday 09 March 2026 00:29:16 +0000 (0:00:01.057) 0:03:54.698 ********** 2026-03-09 00:29:21.198261 | orchestrator | changed: [testbed-manager] 2026-03-09 00:29:21.198269 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:29:21.198274 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:29:21.198280 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:29:21.198290 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:29:21.198296 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:29:21.198302 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:29:21.198307 | orchestrator | 2026-03-09 00:29:21.198313 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-03-09 00:29:21.198319 | orchestrator | Monday 09 March 2026 00:29:17 +0000 (0:00:01.187) 0:03:55.885 ********** 2026-03-09 00:29:21.198325 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:29:21.198331 | orchestrator | changed: [testbed-manager] 2026-03-09 00:29:21.198336 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:29:21.198342 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:29:21.198348 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:29:21.198353 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:29:21.198359 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:29:21.198364 | orchestrator | 2026-03-09 00:29:21.198370 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-09 00:29:21.198376 | orchestrator | Monday 09 March 2026 00:29:18 +0000 (0:00:01.177) 0:03:57.063 ********** 2026-03-09 00:29:21.198382 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:29:21.198387 | orchestrator | changed: [testbed-manager] 2026-03-09 00:29:21.198393 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:29:21.198399 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:29:21.198404 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:29:21.198410 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:29:21.198415 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:29:21.198421 | orchestrator | 2026-03-09 00:29:21.198427 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-09 00:29:21.198433 | orchestrator | Monday 09 March 2026 00:29:19 +0000 (0:00:01.199) 0:03:58.262 ********** 2026-03-09 00:29:21.198439 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:29:21.198444 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:29:21.198450 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:29:21.198459 | orchestrator | skipping: [testbed-manager] 
2026-03-09 00:29:21.198464 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:29:21.198470 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:29:21.198475 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:29:21.198481 | orchestrator | 2026-03-09 00:29:21.198487 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-09 00:29:21.198492 | orchestrator | Monday 09 March 2026 00:29:19 +0000 (0:00:00.341) 0:03:58.604 ********** 2026-03-09 00:29:21.198498 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:29:21.198506 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:29:21.198511 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:29:21.198517 | orchestrator | ok: [testbed-manager] 2026-03-09 00:29:21.198522 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:29:21.198528 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:29:21.198534 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:29:21.198539 | orchestrator | 2026-03-09 00:29:21.198545 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-09 00:29:21.198551 | orchestrator | Monday 09 March 2026 00:29:20 +0000 (0:00:00.748) 0:03:59.353 ********** 2026-03-09 00:29:21.198558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:29:21.198566 | orchestrator | 2026-03-09 00:29:21.198572 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-09 00:29:21.198583 | orchestrator | Monday 09 March 2026 00:29:21 +0000 (0:00:00.472) 0:03:59.825 ********** 2026-03-09 00:30:41.041128 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:41.041220 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:30:41.041234 | orchestrator | changed: 
[testbed-node-0] 2026-03-09 00:30:41.041243 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:30:41.041252 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:30:41.041277 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:30:41.041286 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:30:41.041295 | orchestrator | 2026-03-09 00:30:41.041306 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-09 00:30:41.041316 | orchestrator | Monday 09 March 2026 00:29:29 +0000 (0:00:08.238) 0:04:08.063 ********** 2026-03-09 00:30:41.041325 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:41.041334 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:41.041343 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:41.041351 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:41.041360 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:41.041368 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:41.041377 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:41.041386 | orchestrator | 2026-03-09 00:30:41.041394 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-09 00:30:41.041403 | orchestrator | Monday 09 March 2026 00:29:30 +0000 (0:00:01.292) 0:04:09.356 ********** 2026-03-09 00:30:41.041412 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:41.041420 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:41.041429 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:41.041437 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:41.041446 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:41.041454 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:41.041463 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:41.041471 | orchestrator | 2026-03-09 00:30:41.041480 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-09 00:30:41.041489 | orchestrator | 
Monday 09 March 2026 00:29:31 +0000 (0:00:01.018) 0:04:10.375 ********** 2026-03-09 00:30:41.041497 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:41.041506 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:41.041515 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:41.041523 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:41.041532 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:41.041540 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:41.041549 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:41.041558 | orchestrator | 2026-03-09 00:30:41.041567 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-09 00:30:41.041576 | orchestrator | Monday 09 March 2026 00:29:32 +0000 (0:00:00.298) 0:04:10.674 ********** 2026-03-09 00:30:41.041585 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:41.041593 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:41.041602 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:41.041611 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:41.041619 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:41.041628 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:41.041636 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:41.041645 | orchestrator | 2026-03-09 00:30:41.041654 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-09 00:30:41.041662 | orchestrator | Monday 09 March 2026 00:29:32 +0000 (0:00:00.312) 0:04:10.986 ********** 2026-03-09 00:30:41.041671 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:41.041682 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:41.041692 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:41.041702 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:41.041712 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:41.041723 | orchestrator | ok: [testbed-node-1] 2026-03-09 
00:30:41.041733 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:41.041744 | orchestrator | 2026-03-09 00:30:41.041755 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-09 00:30:41.041765 | orchestrator | Monday 09 March 2026 00:29:32 +0000 (0:00:00.305) 0:04:11.292 ********** 2026-03-09 00:30:41.041776 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:41.041786 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:41.041796 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:41.041807 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:41.041817 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:41.041854 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:41.041865 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:41.041874 | orchestrator | 2026-03-09 00:30:41.041885 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-09 00:30:41.041896 | orchestrator | Monday 09 March 2026 00:29:38 +0000 (0:00:05.576) 0:04:16.869 ********** 2026-03-09 00:30:41.041908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:30:41.041921 | orchestrator | 2026-03-09 00:30:41.041932 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-09 00:30:41.041944 | orchestrator | Monday 09 March 2026 00:29:38 +0000 (0:00:00.449) 0:04:17.319 ********** 2026-03-09 00:30:41.041954 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-09 00:30:41.041965 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-09 00:30:41.041976 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-03-09 00:30:41.041986 | orchestrator | skipping: 
[testbed-node-4] => (item=apt-daily)  2026-03-09 00:30:41.041996 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:30:41.042007 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-03-09 00:30:41.042094 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-03-09 00:30:41.042106 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:30:41.042115 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-09 00:30:41.042123 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:30:41.042160 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-09 00:30:41.042170 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-03-09 00:30:41.042179 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-03-09 00:30:41.042187 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:30:41.042196 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:30:41.042205 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-03-09 00:30:41.042228 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-03-09 00:30:41.042237 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:30:41.042246 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-03-09 00:30:41.042255 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-03-09 00:30:41.042263 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:30:41.042272 | orchestrator | 2026-03-09 00:30:41.042281 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-03-09 00:30:41.042290 | orchestrator | Monday 09 March 2026 00:29:39 +0000 (0:00:00.358) 0:04:17.677 ********** 2026-03-09 00:30:41.042299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:30:41.042308 | orchestrator | 2026-03-09 00:30:41.042316 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-03-09 00:30:41.042325 | orchestrator | Monday 09 March 2026 00:29:39 +0000 (0:00:00.397) 0:04:18.075 ********** 2026-03-09 00:30:41.042334 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-03-09 00:30:41.042342 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-03-09 00:30:41.042351 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:30:41.042360 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-03-09 00:30:41.042368 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:30:41.042377 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:30:41.042385 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-03-09 00:30:41.042403 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-03-09 00:30:41.042419 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:30:41.042427 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-03-09 00:30:41.042436 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:30:41.042444 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:30:41.042453 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-03-09 00:30:41.042461 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:30:41.042470 | orchestrator | 2026-03-09 00:30:41.042479 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-03-09 00:30:41.042487 | orchestrator | Monday 09 March 2026 00:29:39 +0000 (0:00:00.340) 0:04:18.416 ********** 2026-03-09 00:30:41.042496 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:30:41.042505 | orchestrator | 2026-03-09 00:30:41.042514 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-03-09 00:30:41.042522 | orchestrator | Monday 09 March 2026 00:29:40 +0000 (0:00:00.410) 0:04:18.826 ********** 2026-03-09 00:30:41.042531 | orchestrator | changed: [testbed-manager] 2026-03-09 00:30:41.042539 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:30:41.042548 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:30:41.042556 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:30:41.042565 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:30:41.042573 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:30:41.042582 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:30:41.042591 | orchestrator | 2026-03-09 00:30:41.042599 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-09 00:30:41.042608 | orchestrator | Monday 09 March 2026 00:30:15 +0000 (0:00:34.924) 0:04:53.750 ********** 2026-03-09 00:30:41.042616 | orchestrator | changed: [testbed-manager] 2026-03-09 00:30:41.042625 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:30:41.042633 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:30:41.042642 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:30:41.042651 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:30:41.042659 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:30:41.042667 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:30:41.042676 | orchestrator | 2026-03-09 00:30:41.042684 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-09 00:30:41.042693 | orchestrator | 
Monday 09 March 2026 00:30:24 +0000 (0:00:08.992) 0:05:02.742 ********** 2026-03-09 00:30:41.042705 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:30:41.042714 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:30:41.042723 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:30:41.042731 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:30:41.042740 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:30:41.042748 | orchestrator | changed: [testbed-manager] 2026-03-09 00:30:41.042757 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:30:41.042765 | orchestrator | 2026-03-09 00:30:41.042774 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-09 00:30:41.042783 | orchestrator | Monday 09 March 2026 00:30:32 +0000 (0:00:08.140) 0:05:10.883 ********** 2026-03-09 00:30:41.042791 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:41.042800 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:41.042808 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:41.042817 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:41.042840 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:41.042849 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:41.042857 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:41.042866 | orchestrator | 2026-03-09 00:30:41.042875 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-09 00:30:41.042883 | orchestrator | Monday 09 March 2026 00:30:34 +0000 (0:00:01.893) 0:05:12.777 ********** 2026-03-09 00:30:41.042897 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:30:41.042906 | orchestrator | changed: [testbed-manager] 2026-03-09 00:30:41.042914 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:30:41.042923 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:30:41.042932 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:30:41.042940 | orchestrator | changed: 
[testbed-node-1] 2026-03-09 00:30:41.042949 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:30:41.042958 | orchestrator | 2026-03-09 00:30:41.042972 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-09 00:30:52.537253 | orchestrator | Monday 09 March 2026 00:30:41 +0000 (0:00:06.889) 0:05:19.667 ********** 2026-03-09 00:30:52.537396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:30:52.537429 | orchestrator | 2026-03-09 00:30:52.537450 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-09 00:30:52.537470 | orchestrator | Monday 09 March 2026 00:30:41 +0000 (0:00:00.407) 0:05:20.074 ********** 2026-03-09 00:30:52.537488 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:30:52.537508 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:30:52.537526 | orchestrator | changed: [testbed-manager] 2026-03-09 00:30:52.537544 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:30:52.537562 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:30:52.537579 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:30:52.537598 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:30:52.537615 | orchestrator | 2026-03-09 00:30:52.537631 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-09 00:30:52.537646 | orchestrator | Monday 09 March 2026 00:30:42 +0000 (0:00:00.739) 0:05:20.814 ********** 2026-03-09 00:30:52.537662 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:52.537681 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:52.537699 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:52.537716 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:52.537732 | 
orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:52.537749 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:52.537767 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:52.537787 | orchestrator | 2026-03-09 00:30:52.537841 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-09 00:30:52.537861 | orchestrator | Monday 09 March 2026 00:30:43 +0000 (0:00:01.781) 0:05:22.595 ********** 2026-03-09 00:30:52.537881 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:30:52.537901 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:30:52.537920 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:30:52.537939 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:30:52.537958 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:30:52.537977 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:30:52.537998 | orchestrator | changed: [testbed-manager] 2026-03-09 00:30:52.538089 | orchestrator | 2026-03-09 00:30:52.538107 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-09 00:30:52.538120 | orchestrator | Monday 09 March 2026 00:30:44 +0000 (0:00:00.786) 0:05:23.382 ********** 2026-03-09 00:30:52.538131 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:30:52.538142 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:30:52.538153 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:30:52.538164 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:30:52.538174 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:30:52.538185 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:30:52.538196 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:30:52.538207 | orchestrator | 2026-03-09 00:30:52.538218 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-09 00:30:52.538229 | orchestrator | Monday 09 March 2026 00:30:45 +0000 (0:00:00.288) 
0:05:23.671 ********** 2026-03-09 00:30:52.538268 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:30:52.538279 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:30:52.538290 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:30:52.538300 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:30:52.538311 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:30:52.538322 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:30:52.538332 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:30:52.538343 | orchestrator | 2026-03-09 00:30:52.538353 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-09 00:30:52.538364 | orchestrator | Monday 09 March 2026 00:30:45 +0000 (0:00:00.427) 0:05:24.098 ********** 2026-03-09 00:30:52.538375 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:52.538386 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:52.538397 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:52.538407 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:52.538418 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:52.538430 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:52.538448 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:52.538466 | orchestrator | 2026-03-09 00:30:52.538484 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-09 00:30:52.538519 | orchestrator | Monday 09 March 2026 00:30:45 +0000 (0:00:00.349) 0:05:24.448 ********** 2026-03-09 00:30:52.538536 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:30:52.538547 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:30:52.538557 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:30:52.538568 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:30:52.538578 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:30:52.538589 | orchestrator | skipping: [testbed-node-1] 2026-03-09 
00:30:52.538599 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:30:52.538610 | orchestrator | 2026-03-09 00:30:52.538621 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-09 00:30:52.538632 | orchestrator | Monday 09 March 2026 00:30:46 +0000 (0:00:00.364) 0:05:24.813 ********** 2026-03-09 00:30:52.538643 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:52.538653 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:52.538664 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:52.538675 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:52.538685 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:52.538695 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:52.538706 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:52.538716 | orchestrator | 2026-03-09 00:30:52.538727 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-09 00:30:52.538738 | orchestrator | Monday 09 March 2026 00:30:46 +0000 (0:00:00.326) 0:05:25.139 ********** 2026-03-09 00:30:52.538748 | orchestrator | ok: [testbed-node-3] =>  2026-03-09 00:30:52.538759 | orchestrator |  docker_version: 5:27.5.1 2026-03-09 00:30:52.538769 | orchestrator | ok: [testbed-node-4] =>  2026-03-09 00:30:52.538780 | orchestrator |  docker_version: 5:27.5.1 2026-03-09 00:30:52.538790 | orchestrator | ok: [testbed-node-5] =>  2026-03-09 00:30:52.538825 | orchestrator |  docker_version: 5:27.5.1 2026-03-09 00:30:52.538842 | orchestrator | ok: [testbed-manager] =>  2026-03-09 00:30:52.538861 | orchestrator |  docker_version: 5:27.5.1 2026-03-09 00:30:52.538893 | orchestrator | ok: [testbed-node-0] =>  2026-03-09 00:30:52.538905 | orchestrator |  docker_version: 5:27.5.1 2026-03-09 00:30:52.538915 | orchestrator | ok: [testbed-node-1] =>  2026-03-09 00:30:52.538926 | orchestrator |  docker_version: 5:27.5.1 2026-03-09 00:30:52.538936 | orchestrator | ok: [testbed-node-2] =>  
2026-03-09 00:30:52.538947 | orchestrator |  docker_version: 5:27.5.1 2026-03-09 00:30:52.538957 | orchestrator | 2026-03-09 00:30:52.538968 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-09 00:30:52.538979 | orchestrator | Monday 09 March 2026 00:30:46 +0000 (0:00:00.342) 0:05:25.482 ********** 2026-03-09 00:30:52.538990 | orchestrator | ok: [testbed-node-3] =>  2026-03-09 00:30:52.539000 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-09 00:30:52.539021 | orchestrator | ok: [testbed-node-4] =>  2026-03-09 00:30:52.539032 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-09 00:30:52.539042 | orchestrator | ok: [testbed-node-5] =>  2026-03-09 00:30:52.539052 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-09 00:30:52.539063 | orchestrator | ok: [testbed-manager] =>  2026-03-09 00:30:52.539073 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-09 00:30:52.539084 | orchestrator | ok: [testbed-node-0] =>  2026-03-09 00:30:52.539094 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-09 00:30:52.539104 | orchestrator | ok: [testbed-node-1] =>  2026-03-09 00:30:52.539115 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-09 00:30:52.539125 | orchestrator | ok: [testbed-node-2] =>  2026-03-09 00:30:52.539135 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-09 00:30:52.539146 | orchestrator | 2026-03-09 00:30:52.539156 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-09 00:30:52.539167 | orchestrator | Monday 09 March 2026 00:30:47 +0000 (0:00:00.350) 0:05:25.833 ********** 2026-03-09 00:30:52.539178 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:30:52.539189 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:30:52.539199 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:30:52.539209 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:30:52.539220 | orchestrator | skipping: [testbed-node-0] 
2026-03-09 00:30:52.539230 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:30:52.539241 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:30:52.539251 | orchestrator | 2026-03-09 00:30:52.539262 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-09 00:30:52.539272 | orchestrator | Monday 09 March 2026 00:30:47 +0000 (0:00:00.331) 0:05:26.165 ********** 2026-03-09 00:30:52.539283 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:30:52.539293 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:30:52.539303 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:30:52.539314 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:30:52.539324 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:30:52.539335 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:30:52.539345 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:30:52.539355 | orchestrator | 2026-03-09 00:30:52.539366 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-09 00:30:52.539377 | orchestrator | Monday 09 March 2026 00:30:47 +0000 (0:00:00.433) 0:05:26.598 ********** 2026-03-09 00:30:52.539390 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:30:52.539403 | orchestrator | 2026-03-09 00:30:52.539414 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-09 00:30:52.539424 | orchestrator | Monday 09 March 2026 00:30:48 +0000 (0:00:00.477) 0:05:27.075 ********** 2026-03-09 00:30:52.539441 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:52.539460 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:52.539477 | orchestrator | ok: [testbed-manager] 2026-03-09 
00:30:52.539494 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:52.539513 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:52.539532 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:52.539551 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:52.539569 | orchestrator | 2026-03-09 00:30:52.539588 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-09 00:30:52.539606 | orchestrator | Monday 09 March 2026 00:30:49 +0000 (0:00:00.830) 0:05:27.906 ********** 2026-03-09 00:30:52.539624 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:30:52.539641 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:30:52.539657 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:30:52.539676 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:30:52.539694 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:30:52.539712 | orchestrator | ok: [testbed-manager] 2026-03-09 00:30:52.539753 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:30:52.539773 | orchestrator | 2026-03-09 00:30:52.539784 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-09 00:30:52.539796 | orchestrator | Monday 09 March 2026 00:30:52 +0000 (0:00:02.879) 0:05:30.786 ********** 2026-03-09 00:30:52.539873 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-09 00:30:52.539886 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-09 00:30:52.539896 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-09 00:30:52.539907 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-09 00:30:52.539918 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-09 00:30:52.539928 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-09 00:30:52.539946 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:30:52.539965 | orchestrator | skipping: 
[testbed-node-5] => (item=containerd)  2026-03-09 00:30:52.539989 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-09 00:30:52.540012 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-09 00:30:52.540030 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:30:52.540048 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-09 00:30:52.540066 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-09 00:30:52.540084 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-09 00:30:52.540102 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:30:52.540121 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-09 00:30:52.540154 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-09 00:31:54.204065 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-09 00:31:54.204173 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:31:54.204189 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-09 00:31:54.204201 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-09 00:31:54.204212 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-09 00:31:54.204223 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:31:54.204234 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:31:54.204245 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-09 00:31:54.204256 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-09 00:31:54.204266 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-09 00:31:54.204277 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:31:54.204288 | orchestrator | 2026-03-09 00:31:54.204300 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-09 00:31:54.204312 | orchestrator | 
Monday 09 March 2026 00:30:52 +0000 (0:00:00.607) 0:05:31.393 ********** 2026-03-09 00:31:54.204323 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:54.204334 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:31:54.204344 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:31:54.204355 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:31:54.204366 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:31:54.204377 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:31:54.204387 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:31:54.204398 | orchestrator | 2026-03-09 00:31:54.204409 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-09 00:31:54.204420 | orchestrator | Monday 09 March 2026 00:30:59 +0000 (0:00:06.917) 0:05:38.311 ********** 2026-03-09 00:31:54.204431 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:31:54.204441 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:31:54.204452 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:31:54.204463 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:54.204473 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:31:54.204484 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:31:54.204494 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:31:54.204530 | orchestrator | 2026-03-09 00:31:54.204543 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-09 00:31:54.204581 | orchestrator | Monday 09 March 2026 00:31:00 +0000 (0:00:01.143) 0:05:39.455 ********** 2026-03-09 00:31:54.204592 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:54.204604 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:31:54.204618 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:31:54.204630 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:31:54.204643 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:31:54.204655 | 
orchestrator | changed: [testbed-node-2] 2026-03-09 00:31:54.204668 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:31:54.204680 | orchestrator | 2026-03-09 00:31:54.204693 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-09 00:31:54.204706 | orchestrator | Monday 09 March 2026 00:31:09 +0000 (0:00:08.425) 0:05:47.880 ********** 2026-03-09 00:31:54.204719 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:31:54.204732 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:31:54.204744 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:31:54.204756 | orchestrator | changed: [testbed-manager] 2026-03-09 00:31:54.204769 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:31:54.204781 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:31:54.204793 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:31:54.204806 | orchestrator | 2026-03-09 00:31:54.204818 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-09 00:31:54.204831 | orchestrator | Monday 09 March 2026 00:31:12 +0000 (0:00:03.132) 0:05:51.013 ********** 2026-03-09 00:31:54.204843 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:31:54.204856 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:31:54.204868 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:31:54.204881 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:54.204894 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:31:54.204906 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:31:54.204919 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:31:54.204931 | orchestrator | 2026-03-09 00:31:54.204945 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-09 00:31:54.204959 | orchestrator | Monday 09 March 2026 00:31:13 +0000 (0:00:01.487) 0:05:52.500 ********** 2026-03-09 00:31:54.204970 | orchestrator | changed: 
[testbed-node-3] 2026-03-09 00:31:54.204981 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:31:54.205007 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:31:54.205018 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:54.205029 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:31:54.205040 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:31:54.205050 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:31:54.205061 | orchestrator | 2026-03-09 00:31:54.205073 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-09 00:31:54.205084 | orchestrator | Monday 09 March 2026 00:31:15 +0000 (0:00:01.319) 0:05:53.819 ********** 2026-03-09 00:31:54.205095 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:31:54.205105 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:31:54.205188 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:31:54.205206 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:31:54.205224 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:31:54.205242 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:31:54.205268 | orchestrator | changed: [testbed-manager] 2026-03-09 00:31:54.205293 | orchestrator | 2026-03-09 00:31:54.205311 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-09 00:31:54.205329 | orchestrator | Monday 09 March 2026 00:31:16 +0000 (0:00:00.912) 0:05:54.731 ********** 2026-03-09 00:31:54.205347 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:54.205365 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:31:54.205382 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:31:54.205399 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:31:54.205417 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:31:54.205449 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:31:54.205469 | orchestrator | changed: [testbed-node-4] 2026-03-09 
00:31:54.205488 | orchestrator | 2026-03-09 00:31:54.205505 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-09 00:31:54.205545 | orchestrator | Monday 09 March 2026 00:31:26 +0000 (0:00:10.229) 0:06:04.961 ********** 2026-03-09 00:31:54.205590 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:31:54.205608 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:31:54.205626 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:31:54.205643 | orchestrator | changed: [testbed-manager] 2026-03-09 00:31:54.205660 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:31:54.205678 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:31:54.205696 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:31:54.205715 | orchestrator | 2026-03-09 00:31:54.205734 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-09 00:31:54.205751 | orchestrator | Monday 09 March 2026 00:31:27 +0000 (0:00:00.957) 0:06:05.918 ********** 2026-03-09 00:31:54.205770 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:54.205788 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:31:54.205805 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:31:54.205823 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:31:54.205841 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:31:54.205857 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:31:54.205868 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:31:54.205879 | orchestrator | 2026-03-09 00:31:54.205890 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-09 00:31:54.205901 | orchestrator | Monday 09 March 2026 00:31:36 +0000 (0:00:09.332) 0:06:15.251 ********** 2026-03-09 00:31:54.205912 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:54.205922 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:31:54.205933 | 
orchestrator | changed: [testbed-node-5] 2026-03-09 00:31:54.205943 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:31:54.205954 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:31:54.205967 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:31:54.205985 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:31:54.206156 | orchestrator | 2026-03-09 00:31:54.206186 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-09 00:31:54.206203 | orchestrator | Monday 09 March 2026 00:31:47 +0000 (0:00:10.904) 0:06:26.155 ********** 2026-03-09 00:31:54.206222 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-09 00:31:54.206241 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-09 00:31:54.206258 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-09 00:31:54.206277 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-09 00:31:54.206295 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-09 00:31:54.206314 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-09 00:31:54.206335 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-09 00:31:54.206355 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-09 00:31:54.206374 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-09 00:31:54.206394 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-09 00:31:54.206408 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-09 00:31:54.206419 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-09 00:31:54.206429 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-09 00:31:54.206440 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-09 00:31:54.206451 | orchestrator | 2026-03-09 00:31:54.206462 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-09 00:31:54.206473 | orchestrator | Monday 09 March 2026 00:31:48 +0000 (0:00:01.233) 0:06:27.389 ********** 2026-03-09 00:31:54.206483 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:31:54.206494 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:31:54.206518 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:31:54.206613 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:31:54.206630 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:31:54.206641 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:31:54.206652 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:31:54.206662 | orchestrator | 2026-03-09 00:31:54.206673 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-09 00:31:54.206684 | orchestrator | Monday 09 March 2026 00:31:49 +0000 (0:00:00.573) 0:06:27.962 ********** 2026-03-09 00:31:54.206695 | orchestrator | ok: [testbed-manager] 2026-03-09 00:31:54.206706 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:31:54.206717 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:31:54.206727 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:31:54.206737 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:31:54.206748 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:31:54.206759 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:31:54.206769 | orchestrator | 2026-03-09 00:31:54.206781 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-09 00:31:54.206793 | orchestrator | Monday 09 March 2026 00:31:53 +0000 (0:00:03.938) 0:06:31.901 ********** 2026-03-09 00:31:54.206804 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:31:54.206815 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:31:54.206826 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:31:54.206836 | orchestrator | skipping: 
[testbed-manager] 2026-03-09 00:31:54.206847 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:31:54.206857 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:31:54.206868 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:31:54.206878 | orchestrator | 2026-03-09 00:31:54.206890 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-09 00:31:54.206901 | orchestrator | Monday 09 March 2026 00:31:53 +0000 (0:00:00.659) 0:06:32.561 ********** 2026-03-09 00:31:54.206958 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-09 00:31:54.206970 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-09 00:31:54.206981 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:31:54.206992 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-09 00:31:54.207003 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-09 00:31:54.207013 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:31:54.207024 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-09 00:31:54.207041 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-09 00:31:54.207060 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:31:54.207098 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-09 00:32:13.607854 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-09 00:32:13.607998 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:32:13.608027 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-09 00:32:13.608048 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-09 00:32:13.608069 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:32:13.608088 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-09 00:32:13.608107 | 
orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-09 00:32:13.608125 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:32:13.608144 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-09 00:32:13.608162 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-09 00:32:13.608176 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:32:13.608187 | orchestrator | 2026-03-09 00:32:13.608200 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-09 00:32:13.608212 | orchestrator | Monday 09 March 2026 00:31:54 +0000 (0:00:00.562) 0:06:33.123 ********** 2026-03-09 00:32:13.608256 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:32:13.608268 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:32:13.608278 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:32:13.608289 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:32:13.608300 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:32:13.608310 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:32:13.608321 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:32:13.608331 | orchestrator | 2026-03-09 00:32:13.608343 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-09 00:32:13.608354 | orchestrator | Monday 09 March 2026 00:31:54 +0000 (0:00:00.503) 0:06:33.626 ********** 2026-03-09 00:32:13.608367 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:32:13.608380 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:32:13.608392 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:32:13.608405 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:32:13.608417 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:32:13.608429 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:32:13.608442 | orchestrator | skipping: [testbed-node-2] 
2026-03-09 00:32:13.608454 | orchestrator |
2026-03-09 00:32:13.608467 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-09 00:32:13.608479 | orchestrator | Monday 09 March 2026 00:31:55 +0000 (0:00:00.542) 0:06:34.169 **********
2026-03-09 00:32:13.608492 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:32:13.608504 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:32:13.608517 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:32:13.608530 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:32:13.608542 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:32:13.608554 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:32:13.608605 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:32:13.608625 | orchestrator |
2026-03-09 00:32:13.608645 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-09 00:32:13.608661 | orchestrator | Monday 09 March 2026 00:31:56 +0000 (0:00:00.551) 0:06:34.721 **********
2026-03-09 00:32:13.608672 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:32:13.608683 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:13.608693 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:32:13.608704 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:32:13.608714 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:32:13.608725 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:32:13.608735 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:32:13.608746 | orchestrator |
2026-03-09 00:32:13.608757 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-09 00:32:13.608768 | orchestrator | Monday 09 March 2026 00:31:58 +0000 (0:00:02.049) 0:06:36.770 **********
2026-03-09 00:32:13.608780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:32:13.608793 | orchestrator |
2026-03-09 00:32:13.608805 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-09 00:32:13.608815 | orchestrator | Monday 09 March 2026 00:31:59 +0000 (0:00:00.891) 0:06:37.662 **********
2026-03-09 00:32:13.608841 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:32:13.608853 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:32:13.608863 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:32:13.608874 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:13.608885 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:32:13.608896 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:32:13.608906 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:32:13.608917 | orchestrator |
2026-03-09 00:32:13.608928 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-09 00:32:13.608939 | orchestrator | Monday 09 March 2026 00:31:59 +0000 (0:00:00.877) 0:06:38.540 **********
2026-03-09 00:32:13.608959 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:32:13.608970 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:32:13.608980 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:32:13.608991 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:13.609002 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:32:13.609012 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:32:13.609023 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:32:13.609033 | orchestrator |
2026-03-09 00:32:13.609044 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-09 00:32:13.609055 | orchestrator | Monday 09 March 2026 00:32:01 +0000 (0:00:01.136) 0:06:39.676 **********
2026-03-09 00:32:13.609066 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:32:13.609076 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:32:13.609087 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:32:13.609097 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:13.609108 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:32:13.609119 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:32:13.609129 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:32:13.609140 | orchestrator |
2026-03-09 00:32:13.609150 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-09 00:32:13.609182 | orchestrator | Monday 09 March 2026 00:32:02 +0000 (0:00:01.335) 0:06:41.012 **********
2026-03-09 00:32:13.609194 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:32:13.609204 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:32:13.609215 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:32:13.609226 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:32:13.609236 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:32:13.609247 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:32:13.609257 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:32:13.609268 | orchestrator |
2026-03-09 00:32:13.609279 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-09 00:32:13.609290 | orchestrator | Monday 09 March 2026 00:32:03 +0000 (0:00:01.382) 0:06:42.394 **********
2026-03-09 00:32:13.609300 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:32:13.609311 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:32:13.609322 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:32:13.609332 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:13.609343 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:32:13.609353 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:32:13.609364 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:32:13.609374 | orchestrator |
2026-03-09 00:32:13.609385 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-09 00:32:13.609396 | orchestrator | Monday 09 March 2026 00:32:05 +0000 (0:00:01.336) 0:06:43.731 **********
2026-03-09 00:32:13.609406 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:32:13.609417 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:32:13.609428 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:32:13.609438 | orchestrator | changed: [testbed-manager]
2026-03-09 00:32:13.609449 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:32:13.609459 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:32:13.609470 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:32:13.609480 | orchestrator |
2026-03-09 00:32:13.609491 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-09 00:32:13.609502 | orchestrator | Monday 09 March 2026 00:32:06 +0000 (0:00:01.409) 0:06:45.140 **********
2026-03-09 00:32:13.609513 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:32:13.609524 | orchestrator |
2026-03-09 00:32:13.609534 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-09 00:32:13.609545 | orchestrator | Monday 09 March 2026 00:32:07 +0000 (0:00:01.056) 0:06:46.197 **********
2026-03-09 00:32:13.609572 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:32:13.609604 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:32:13.609628 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:32:13.609654 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:13.609673 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:32:13.609691 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:32:13.609709 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:32:13.609726 | orchestrator |
2026-03-09 00:32:13.609744 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-09 00:32:13.609761 | orchestrator | Monday 09 March 2026 00:32:09 +0000 (0:00:01.471) 0:06:47.668 **********
2026-03-09 00:32:13.609779 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:32:13.609798 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:32:13.609817 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:32:13.609836 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:13.609856 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:32:13.609875 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:32:13.609890 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:32:13.609901 | orchestrator |
2026-03-09 00:32:13.609912 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-09 00:32:13.609923 | orchestrator | Monday 09 March 2026 00:32:10 +0000 (0:00:01.137) 0:06:48.806 **********
2026-03-09 00:32:13.609934 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:32:13.609944 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:32:13.609955 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:32:13.609966 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:13.609976 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:32:13.609987 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:32:13.609998 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:32:13.610009 | orchestrator |
2026-03-09 00:32:13.610076 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-09 00:32:13.610088 | orchestrator | Monday 09 March 2026 00:32:11 +0000 (0:00:01.095) 0:06:49.902 **********
2026-03-09 00:32:13.610099 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:32:13.610109 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:32:13.610120 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:32:13.610131 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:13.610142 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:32:13.610152 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:32:13.610163 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:32:13.610174 | orchestrator |
2026-03-09 00:32:13.610184 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-09 00:32:13.610195 | orchestrator | Monday 09 March 2026 00:32:12 +0000 (0:00:01.336) 0:06:51.238 **********
2026-03-09 00:32:13.610206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:32:13.610218 | orchestrator |
2026-03-09 00:32:13.610229 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-09 00:32:13.610239 | orchestrator | Monday 09 March 2026 00:32:13 +0000 (0:00:00.862) 0:06:52.100 **********
2026-03-09 00:32:13.610250 | orchestrator |
2026-03-09 00:32:13.610261 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-09 00:32:13.610271 | orchestrator | Monday 09 March 2026 00:32:13 +0000 (0:00:00.038) 0:06:52.139 **********
2026-03-09 00:32:13.610282 | orchestrator |
2026-03-09 00:32:13.610293 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-09 00:32:13.610304 | orchestrator | Monday 09 March 2026 00:32:13 +0000 (0:00:00.038) 0:06:52.177 **********
2026-03-09 00:32:13.610315 | orchestrator |
2026-03-09 00:32:13.610326 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-09 00:32:13.610348 | orchestrator | Monday 09 March 2026 00:32:13 +0000 (0:00:00.054) 0:06:52.232 **********
2026-03-09 00:32:40.721753 | orchestrator |
2026-03-09 00:32:40.721863 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-09 00:32:40.721905 | orchestrator | Monday 09 March 2026 00:32:13 +0000 (0:00:00.075) 0:06:52.307 **********
2026-03-09 00:32:40.721918 | orchestrator |
2026-03-09 00:32:40.721929 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-09 00:32:40.721939 | orchestrator | Monday 09 March 2026 00:32:13 +0000 (0:00:00.039) 0:06:52.347 **********
2026-03-09 00:32:40.721950 | orchestrator |
2026-03-09 00:32:40.721961 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-09 00:32:40.721972 | orchestrator | Monday 09 March 2026 00:32:13 +0000 (0:00:00.048) 0:06:52.396 **********
2026-03-09 00:32:40.721982 | orchestrator |
2026-03-09 00:32:40.721993 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-09 00:32:40.722004 | orchestrator | Monday 09 March 2026 00:32:13 +0000 (0:00:00.041) 0:06:52.437 **********
2026-03-09 00:32:40.722068 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:32:40.722083 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:32:40.722094 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:32:40.722105 | orchestrator |
2026-03-09 00:32:40.722117 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-09 00:32:40.722128 | orchestrator | Monday 09 March 2026 00:32:14 +0000 (0:00:01.143) 0:06:53.581 **********
2026-03-09 00:32:40.722140 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:32:40.722152 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:32:40.722163 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:32:40.722174 | orchestrator | changed: [testbed-manager]
2026-03-09 00:32:40.722184 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:32:40.722195 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:32:40.722206 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:32:40.722218 | orchestrator |
2026-03-09 00:32:40.722229 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-09 00:32:40.722240 | orchestrator | Monday 09 March 2026 00:32:16 +0000 (0:00:01.566) 0:06:55.148 **********
2026-03-09 00:32:40.722251 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:32:40.722262 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:32:40.722272 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:32:40.722283 | orchestrator | changed: [testbed-manager]
2026-03-09 00:32:40.722296 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:32:40.722309 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:32:40.722321 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:32:40.722333 | orchestrator |
2026-03-09 00:32:40.722346 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-09 00:32:40.722358 | orchestrator | Monday 09 March 2026 00:32:17 +0000 (0:00:01.240) 0:06:56.388 **********
2026-03-09 00:32:40.722371 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:32:40.722383 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:32:40.722396 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:32:40.722409 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:32:40.722421 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:32:40.722433 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:32:40.722446 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:32:40.722459 | orchestrator |
2026-03-09 00:32:40.722471 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-09 00:32:40.722484 | orchestrator | Monday 09 March 2026 00:32:20 +0000 (0:00:02.950) 0:06:59.339 **********
2026-03-09 00:32:40.722497 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:32:40.722509 | orchestrator |
2026-03-09 00:32:40.722521 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-09 00:32:40.722534 | orchestrator | Monday 09 March 2026 00:32:20 +0000 (0:00:00.104) 0:06:59.443 **********
2026-03-09 00:32:40.722547 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:40.722580 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:32:40.722593 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:32:40.722606 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:32:40.722619 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:32:40.722632 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:32:40.722653 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:32:40.722664 | orchestrator |
2026-03-09 00:32:40.722675 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-09 00:32:40.722687 | orchestrator | Monday 09 March 2026 00:32:21 +0000 (0:00:01.007) 0:07:00.451 **********
2026-03-09 00:32:40.722697 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:32:40.722708 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:32:40.722733 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:32:40.722744 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:32:40.722754 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:32:40.722765 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:32:40.722775 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:32:40.722786 | orchestrator |
2026-03-09 00:32:40.722797 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-09 00:32:40.722807 | orchestrator | Monday 09 March 2026 00:32:22 +0000 (0:00:00.721) 0:07:01.173 **********
2026-03-09 00:32:40.722819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:32:40.722832 | orchestrator |
2026-03-09 00:32:40.722842 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-09 00:32:40.722853 | orchestrator | Monday 09 March 2026 00:32:23 +0000 (0:00:00.948) 0:07:02.122 **********
2026-03-09 00:32:40.722864 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:32:40.722874 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:32:40.722885 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:32:40.722895 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:40.722906 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:32:40.722916 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:32:40.722927 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:32:40.722937 | orchestrator |
2026-03-09 00:32:40.722948 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-09 00:32:40.722959 | orchestrator | Monday 09 March 2026 00:32:24 +0000 (0:00:00.904) 0:07:03.026 **********
2026-03-09 00:32:40.722969 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-09 00:32:40.722998 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-09 00:32:40.723010 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-09 00:32:40.723020 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-09 00:32:40.723031 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-09 00:32:40.723042 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-09 00:32:40.723052 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-09 00:32:40.723063 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-09 00:32:40.723074 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-09 00:32:40.723085 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-09 00:32:40.723095 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-09 00:32:40.723106 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-09 00:32:40.723116 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-09 00:32:40.723127 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-09 00:32:40.723137 | orchestrator |
2026-03-09 00:32:40.723148 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-09 00:32:40.723159 | orchestrator | Monday 09 March 2026 00:32:27 +0000 (0:00:02.975) 0:07:06.002 **********
2026-03-09 00:32:40.723170 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:32:40.723180 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:32:40.723191 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:32:40.723201 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:32:40.723212 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:32:40.723230 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:32:40.723240 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:32:40.723251 | orchestrator |
2026-03-09 00:32:40.723262 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-09 00:32:40.723272 | orchestrator | Monday 09 March 2026 00:32:27 +0000 (0:00:00.580) 0:07:06.583 **********
2026-03-09 00:32:40.723285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:32:40.723297 | orchestrator |
2026-03-09 00:32:40.723308 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-09 00:32:40.723319 | orchestrator | Monday 09 March 2026 00:32:28 +0000 (0:00:00.931) 0:07:07.515 **********
2026-03-09 00:32:40.723329 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:32:40.723340 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:32:40.723350 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:32:40.723361 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:40.723371 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:32:40.723382 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:32:40.723392 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:32:40.723403 | orchestrator |
2026-03-09 00:32:40.723414 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-09 00:32:40.723424 | orchestrator | Monday 09 March 2026 00:32:29 +0000 (0:00:00.859) 0:07:08.374 **********
2026-03-09 00:32:40.723435 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:32:40.723445 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:32:40.723456 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:32:40.723467 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:40.723477 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:32:40.723487 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:32:40.723498 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:32:40.723508 | orchestrator |
2026-03-09 00:32:40.723519 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-09 00:32:40.723530 | orchestrator | Monday 09 March 2026 00:32:30 +0000 (0:00:01.080) 0:07:09.455 **********
2026-03-09 00:32:40.723540 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:32:40.723551 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:32:40.723604 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:32:40.723616 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:32:40.723627 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:32:40.723637 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:32:40.723648 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:32:40.723658 | orchestrator |
2026-03-09 00:32:40.723675 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-09 00:32:40.723687 | orchestrator | Monday 09 March 2026 00:32:31 +0000 (0:00:00.468) 0:07:09.923 **********
2026-03-09 00:32:40.723697 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:32:40.723708 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:32:40.723719 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:32:40.723729 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:40.723740 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:32:40.723751 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:32:40.723761 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:32:40.723772 | orchestrator |
2026-03-09 00:32:40.723783 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-09 00:32:40.723793 | orchestrator | Monday 09 March 2026 00:32:32 +0000 (0:00:01.444) 0:07:11.368 **********
2026-03-09 00:32:40.723804 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:32:40.723815 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:32:40.723825 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:32:40.723836 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:32:40.723846 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:32:40.723857 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:32:40.723877 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:32:40.723888 | orchestrator |
2026-03-09 00:32:40.723898 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-09 00:32:40.723909 | orchestrator | Monday 09 March 2026 00:32:33 +0000 (0:00:00.532) 0:07:11.900 **********
2026-03-09 00:32:40.723920 | orchestrator | ok: [testbed-manager]
2026-03-09 00:32:40.723930 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:32:40.723941 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:32:40.723952 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:32:40.723962 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:32:40.723973 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:32:40.723990 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:14.071192 | orchestrator |
2026-03-09 00:33:14.071417 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-09 00:33:14.071444 | orchestrator | Monday 09 March 2026 00:32:40 +0000 (0:00:07.682) 0:07:19.583 **********
2026-03-09 00:33:14.071460 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:14.071476 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:14.071492 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:14.071509 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:14.071525 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:14.071542 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:14.071660 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:14.071682 | orchestrator |
2026-03-09 00:33:14.071700 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-09 00:33:14.071718 | orchestrator | Monday 09 March 2026 00:32:42 +0000 (0:00:01.288) 0:07:20.871 **********
2026-03-09 00:33:14.071736 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:14.071753 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:14.071770 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:14.071787 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:14.071804 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:14.071820 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:14.071837 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:14.071853 | orchestrator |
2026-03-09 00:33:14.071870 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-09 00:33:14.071887 | orchestrator | Monday 09 March 2026 00:32:43 +0000 (0:00:01.685) 0:07:22.557 **********
2026-03-09 00:33:14.071903 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:14.071919 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:14.071936 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:14.071952 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:14.071968 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:14.071984 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:14.072000 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:14.072017 | orchestrator |
2026-03-09 00:33:14.072033 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-09 00:33:14.072049 | orchestrator | Monday 09 March 2026 00:32:45 +0000 (0:00:01.816) 0:07:24.373 **********
2026-03-09 00:33:14.072065 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:14.072081 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:14.072097 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:14.072113 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:14.072129 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:14.072145 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:14.072161 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:14.072177 | orchestrator |
2026-03-09 00:33:14.072193 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-09 00:33:14.072209 | orchestrator | Monday 09 March 2026 00:32:46 +0000 (0:00:01.199) 0:07:25.573 **********
2026-03-09 00:33:14.072225 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:14.072241 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:14.072257 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:14.072273 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:14.072322 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:14.072339 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:14.072355 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:14.072372 | orchestrator |
2026-03-09 00:33:14.072388 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-09 00:33:14.072405 | orchestrator | Monday 09 March 2026 00:32:47 +0000 (0:00:00.840) 0:07:26.413 **********
2026-03-09 00:33:14.072421 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:14.072437 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:14.072453 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:14.072469 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:14.072485 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:14.072501 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:14.072517 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:14.072533 | orchestrator |
2026-03-09 00:33:14.072549 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-09 00:33:14.072606 | orchestrator | Monday 09 March 2026 00:32:48 +0000 (0:00:00.527) 0:07:26.940 **********
2026-03-09 00:33:14.072624 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:14.072640 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:14.072657 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:14.072674 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:14.072691 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:14.072708 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:14.072726 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:14.072743 | orchestrator |
2026-03-09 00:33:14.072761 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-09 00:33:14.072778 | orchestrator | Monday 09 March 2026 00:32:48 +0000 (0:00:00.556) 0:07:27.497 **********
2026-03-09 00:33:14.072796 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:14.072813 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:14.072831 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:14.072849 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:14.072866 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:14.072883 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:14.072900 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:14.072917 | orchestrator |
2026-03-09 00:33:14.072934 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-09 00:33:14.072951 | orchestrator | Monday 09 March 2026 00:32:49 +0000 (0:00:00.782) 0:07:28.279 **********
2026-03-09 00:33:14.072968 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:14.072986 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:14.073003 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:14.073021 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:14.073038 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:14.073055 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:14.073074 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:14.073091 | orchestrator |
2026-03-09 00:33:14.073108 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-09 00:33:14.073126 | orchestrator | Monday 09 March 2026 00:32:50 +0000 (0:00:00.542) 0:07:28.822 **********
2026-03-09 00:33:14.073143 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:14.073161 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:14.073179 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:14.073197 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:14.073214 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:14.073230 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:14.073247 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:14.073264 | orchestrator |
2026-03-09 00:33:14.073307 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-09 00:33:14.073349 | orchestrator | Monday 09 March 2026 00:32:55 +0000 (0:00:05.632) 0:07:34.454 **********
2026-03-09 00:33:14.073366 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:14.073383 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:14.073400 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:14.073429 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:14.073446 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:14.073463 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:14.073479 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:14.073496 | orchestrator |
2026-03-09 00:33:14.073512 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-09 00:33:14.073618 | orchestrator | Monday 09 March 2026 00:32:56 +0000 (0:00:00.562) 0:07:35.017 **********
2026-03-09 00:33:14.073644 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:33:14.073665 | orchestrator |
2026-03-09 00:33:14.073682 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-09 00:33:14.073700 | orchestrator | Monday 09 March 2026 00:32:57 +0000 (0:00:01.060) 0:07:36.077 **********
2026-03-09 00:33:14.073718 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:14.073735 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:14.073753 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:14.073770 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:14.073788 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:14.073805 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:14.073821 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:14.073839 | orchestrator |
2026-03-09 00:33:14.073857 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-09 00:33:14.073874 | orchestrator | Monday 09 March 2026 00:32:59 +0000 (0:00:01.939) 0:07:38.016 **********
2026-03-09 00:33:14.073891 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:14.073909 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:14.073926 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:14.073943 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:14.073960 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:14.073978 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:14.073994 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:14.074101 | orchestrator |
2026-03-09 00:33:14.074127 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-09 00:33:14.074146 | orchestrator | Monday 09 March 2026 00:33:00 +0000 (0:00:00.869) 0:07:39.250 **********
2026-03-09 00:33:14.074166 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:14.074183 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:14.074200 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:14.074217 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:14.074234 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:14.074251 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:14.074268 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:14.074285 | orchestrator |
2026-03-09 00:33:14.074302 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-09 00:33:14.074320 | orchestrator | Monday 09 March 2026 00:33:01 +0000 (0:00:00.869) 0:07:40.119 **********
2026-03-09 00:33:14.074337 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:33:14.074355 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:33:14.074372 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:33:14.074387 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:33:14.074403 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:33:14.074429 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:33:14.074461 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-09 00:33:14.074477 | orchestrator |
2026-03-09 00:33:14.074494 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-09 00:33:14.074510 | orchestrator | Monday 09 March 2026 00:33:03 +0000 (0:00:02.137) 0:07:42.257 **********
2026-03-09 00:33:14.074528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:33:14.074546 | orchestrator |
2026-03-09 00:33:14.074588 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-09 00:33:14.074609 | orchestrator | Monday 09 March 2026 00:33:04 +0000 (0:00:00.994) 0:07:43.251 **********
2026-03-09 00:33:14.074628 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:14.074646 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:14.074664 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:14.074682 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:14.074700 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:14.074718 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:14.074736 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:14.074753 | orchestrator |
2026-03-09 00:33:14.074789 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-09 00:33:44.741850 | orchestrator | Monday 09 March 2026 00:33:14 +0000 (0:00:09.443) 0:07:52.695 **********
2026-03-09 00:33:44.741956 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:44.741972 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:44.741983 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:44.741995 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:44.742005 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:44.742073 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:44.742087 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:44.742098 | orchestrator |
2026-03-09 00:33:44.742110 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-09 00:33:44.742121 | orchestrator | Monday 09 March 2026 00:33:16 +0000 (0:00:02.165) 0:07:54.861 **********
2026-03-09 00:33:44.742132 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:44.742179 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:44.742190 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:44.742201 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:44.742211 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:44.742222 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:44.742233 | orchestrator |
2026-03-09 00:33:44.742244 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-09 00:33:44.742255 | orchestrator | Monday 09 March 2026 00:33:17 +0000 (0:00:01.340) 0:07:56.201 **********
2026-03-09 00:33:44.742266 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:44.742278 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:44.742288 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:44.742299 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:44.742309 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:44.742320 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:44.742330 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:44.742341 | orchestrator |
2026-03-09 00:33:44.742352 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-09 00:33:44.742363 | orchestrator |
2026-03-09 00:33:44.742374 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-09 00:33:44.742387 | orchestrator | Monday 09 March 2026 00:33:19 +0000 (0:00:01.458) 0:07:57.660 **********
2026-03-09 00:33:44.742400 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:44.742413 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:44.742425 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:44.742463 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:44.742476 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:44.742488 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:44.742501 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:44.742513 | orchestrator |
2026-03-09 00:33:44.742526 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-09 00:33:44.742538 | orchestrator |
2026-03-09 00:33:44.742572 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-09 00:33:44.742585 | orchestrator | Monday 09 March 2026 00:33:19 +0000 (0:00:00.491) 0:07:58.152 **********
2026-03-09 00:33:44.742598 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:44.742610 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:44.742622 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:44.742635 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:44.742648 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:44.742660 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:44.742671 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:44.742684 | orchestrator |
2026-03-09 00:33:44.742710 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-09 00:33:44.742738 | orchestrator | Monday 09 March 2026 00:33:20 +0000 (0:00:01.333) 0:07:59.486 **********
2026-03-09 00:33:44.742757 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:44.742776 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:44.742792 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:44.742803 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:44.742814 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:44.742824 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:44.742844 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:44.742855 | orchestrator |
2026-03-09 00:33:44.742866 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-09 00:33:44.742877 | orchestrator | Monday 09 March 2026 00:33:22 +0000 (0:00:01.450) 0:08:00.936 **********
2026-03-09 00:33:44.742888 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:33:44.742901 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:33:44.742918 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:33:44.742935 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:33:44.742953 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:33:44.742969 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:33:44.742995 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:33:44.743006 | orchestrator |
2026-03-09 00:33:44.743017 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-09 00:33:44.743028 | orchestrator | Monday 09 March 2026 00:33:22 +0000 (0:00:00.630) 0:08:01.567 **********
2026-03-09 00:33:44.743039 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:33:44.743052 | orchestrator |
2026-03-09 00:33:44.743062 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-09 00:33:44.743073 | orchestrator | Monday 09 March 2026 00:33:23 +0000 (0:00:00.855) 0:08:02.423 **********
2026-03-09 00:33:44.743086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:33:44.743099 | orchestrator |
2026-03-09 00:33:44.743110 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-09 00:33:44.743121 | orchestrator | Monday 09 March 2026 00:33:24 +0000 (0:00:00.804) 0:08:03.227 **********
2026-03-09 00:33:44.743132 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:44.743142 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:44.743153 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:44.743163 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:44.743174 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:44.743184 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:44.743204 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:44.743215 | orchestrator |
2026-03-09 00:33:44.743246 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-09 00:33:44.743257 | orchestrator | Monday 09 March 2026 00:33:33 +0000 (0:00:08.764) 0:08:11.991 **********
2026-03-09 00:33:44.743268 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:44.743278 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:44.743289 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:44.743299 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:44.743310 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:44.743320 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:44.743331 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:44.743341 | orchestrator |
2026-03-09 00:33:44.743352 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-09 00:33:44.743363 | orchestrator | Monday 09 March 2026 00:33:34 +0000 (0:00:00.869) 0:08:12.861 **********
2026-03-09 00:33:44.743373 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:44.743384 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:44.743394 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:44.743405 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:44.743415 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:44.743425 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:44.743436 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:44.743446 | orchestrator |
2026-03-09 00:33:44.743457 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-09 00:33:44.743467 | orchestrator | Monday 09 March 2026 00:33:35 +0000 (0:00:01.325) 0:08:14.187 **********
2026-03-09 00:33:44.743478 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:44.743488 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:44.743499 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:44.743509 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:44.743519 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:44.743530 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:44.743540 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:44.743571 | orchestrator |
2026-03-09 00:33:44.743583 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-09 00:33:44.743593 | orchestrator | Monday 09 March 2026 00:33:37 +0000 (0:00:01.950) 0:08:16.137 **********
2026-03-09 00:33:44.743604 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:44.743614 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:44.743625 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:44.743635 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:44.743646 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:44.743656 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:44.743666 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:44.743677 | orchestrator |
2026-03-09 00:33:44.743687 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-09 00:33:44.743698 | orchestrator | Monday 09 March 2026 00:33:38 +0000 (0:00:01.256) 0:08:17.393 **********
2026-03-09 00:33:44.743709 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:44.743720 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:44.743730 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:44.743741 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:44.743751 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:44.743762 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:44.743772 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:44.743783 | orchestrator |
2026-03-09 00:33:44.743793 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-09 00:33:44.743804 | orchestrator |
2026-03-09 00:33:44.743815 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-09 00:33:44.743826 | orchestrator | Monday 09 March 2026 00:33:39 +0000 (0:00:01.140) 0:08:18.534 **********
2026-03-09 00:33:44.743836 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:33:44.743854 | orchestrator |
2026-03-09 00:33:44.743865 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-09 00:33:44.743876 | orchestrator | Monday 09 March 2026 00:33:40 +0000 (0:00:00.987) 0:08:19.521 **********
2026-03-09 00:33:44.743886 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:44.743897 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:44.743907 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:44.743918 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:44.743928 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:44.743939 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:44.743949 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:44.743960 | orchestrator |
2026-03-09 00:33:44.743976 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-09 00:33:44.743987 | orchestrator | Monday 09 March 2026 00:33:41 +0000 (0:00:00.846) 0:08:20.368 **********
2026-03-09 00:33:44.743998 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:44.744008 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:44.744038 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:44.744072 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:44.744084 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:44.744094 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:44.744105 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:44.744116 | orchestrator |
2026-03-09 00:33:44.744127 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-09 00:33:44.744137 | orchestrator | Monday 09 March 2026 00:33:42 +0000 (0:00:01.124) 0:08:21.492 **********
2026-03-09 00:33:44.744148 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:33:44.744159 | orchestrator |
2026-03-09 00:33:44.744170 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-09 00:33:44.744180 | orchestrator | Monday 09 March 2026 00:33:43 +0000 (0:00:01.030) 0:08:22.523 **********
2026-03-09 00:33:44.744191 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:33:44.744201 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:33:44.744212 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:33:44.744222 | orchestrator | ok: [testbed-manager]
2026-03-09 00:33:44.744233 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:33:44.744243 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:33:44.744254 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:33:44.744265 | orchestrator |
2026-03-09 00:33:44.744284 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-09 00:33:46.223186 | orchestrator | Monday 09 March 2026 00:33:44 +0000 (0:00:00.845) 0:08:23.368 **********
2026-03-09 00:33:46.223302 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:33:46.223318 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:33:46.223329 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:33:46.223339 | orchestrator | changed: [testbed-manager]
2026-03-09 00:33:46.223348 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:33:46.223358 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:33:46.223368 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:33:46.223377 | orchestrator |
2026-03-09 00:33:46.223388 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:33:46.223399 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-09 00:33:46.223410 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-09 00:33:46.223420 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-09 00:33:46.223460 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-09 00:33:46.223470 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-09 00:33:46.223480 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-09 00:33:46.223490 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-09 00:33:46.223500 | orchestrator |
2026-03-09 00:33:46.223509 | orchestrator |
2026-03-09 00:33:46.223519 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:33:46.223529 | orchestrator | Monday 09 March 2026 00:33:45 +0000 (0:00:01.110) 0:08:24.479 **********
2026-03-09 00:33:46.223539 | orchestrator | ===============================================================================
2026-03-09 00:33:46.223548 | orchestrator | osism.commons.packages : Install required packages --------------------- 86.94s
2026-03-09 00:33:46.223624 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.51s
2026-03-09 00:33:46.223634 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.92s
2026-03-09 00:33:46.223643 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.46s
2026-03-09 00:33:46.223653 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.47s
2026-03-09 00:33:46.223663 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.28s
2026-03-09 00:33:46.223673 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.90s
2026-03-09 00:33:46.223682 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.23s
2026-03-09 00:33:46.223691 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.44s
2026-03-09 00:33:46.223701 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.33s
2026-03-09 00:33:46.223711 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.99s
2026-03-09 00:33:46.223723 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.76s
2026-03-09 00:33:46.223734 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.43s
2026-03-09 00:33:46.223762 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.24s
2026-03-09 00:33:46.223774 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.14s
2026-03-09 00:33:46.223784 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.68s
2026-03-09 00:33:46.223795 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.92s
2026-03-09 00:33:46.223807 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.89s
2026-03-09 00:33:46.223817 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.05s
2026-03-09 00:33:46.223829 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.77s
2026-03-09 00:33:46.549896 | orchestrator | + osism apply fail2ban
2026-03-09 00:33:59.621996 | orchestrator | 2026-03-09 00:33:59 | INFO  | Prepare task for execution of fail2ban.
2026-03-09 00:33:59.706735 | orchestrator | 2026-03-09 00:33:59 | INFO  | Task 5a4b749d-7935-499c-98b1-224e9aca521f (fail2ban) was prepared for execution.
2026-03-09 00:33:59.706837 | orchestrator | 2026-03-09 00:33:59 | INFO  | It takes a moment until task 5a4b749d-7935-499c-98b1-224e9aca521f (fail2ban) has been started and output is visible here.
2026-03-09 00:34:22.346165 | orchestrator |
2026-03-09 00:34:22.346280 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-09 00:34:22.346297 | orchestrator |
2026-03-09 00:34:22.346310 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-09 00:34:22.346351 | orchestrator | Monday 09 March 2026 00:34:04 +0000 (0:00:00.291) 0:00:00.291 **********
2026-03-09 00:34:22.346365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:34:22.346379 | orchestrator |
2026-03-09 00:34:22.346391 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-09 00:34:22.346402 | orchestrator | Monday 09 March 2026 00:34:05 +0000 (0:00:01.145) 0:00:01.436 **********
2026-03-09 00:34:22.346412 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:22.346425 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:22.346435 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:22.346446 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:22.346457 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:22.346467 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:22.346478 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:22.346489 | orchestrator |
2026-03-09 00:34:22.346499 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-09 00:34:22.346510 | orchestrator | Monday 09 March 2026 00:34:17 +0000 (0:00:11.491) 0:00:12.928 **********
2026-03-09 00:34:22.346521 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:22.346532 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:22.346621 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:22.346635 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:22.346645 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:22.346656 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:22.346681 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:22.346692 | orchestrator |
2026-03-09 00:34:22.346703 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-09 00:34:22.346725 | orchestrator | Monday 09 March 2026 00:34:18 +0000 (0:00:01.497) 0:00:14.426 **********
2026-03-09 00:34:22.346736 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:34:22.346748 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:34:22.346759 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:34:22.346770 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:34:22.346780 | orchestrator | ok: [testbed-manager]
2026-03-09 00:34:22.346791 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:34:22.346802 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:34:22.346812 | orchestrator |
2026-03-09 00:34:22.346823 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-09 00:34:22.346834 | orchestrator | Monday 09 March 2026 00:34:20 +0000 (0:00:01.583) 0:00:16.009 **********
2026-03-09 00:34:22.346845 | orchestrator | changed: [testbed-manager]
2026-03-09 00:34:22.346856 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:34:22.346867 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:34:22.346878 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:34:22.346889 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:34:22.346900 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:34:22.346910 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:34:22.346921 | orchestrator |
2026-03-09 00:34:22.346932 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:34:22.346944 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:34:22.346956 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:34:22.346967 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:34:22.346978 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:34:22.346998 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:34:22.347024 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:34:22.347036 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:34:22.347047 | orchestrator |
2026-03-09 00:34:22.347058 | orchestrator |
2026-03-09 00:34:22.347068 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:34:22.347079 | orchestrator | Monday 09 March 2026 00:34:22 +0000 (0:00:01.650) 0:00:17.660 **********
2026-03-09 00:34:22.347090 | orchestrator | ===============================================================================
2026-03-09 00:34:22.347101 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.49s
2026-03-09 00:34:22.347112 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.65s
2026-03-09 00:34:22.347122 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.58s
2026-03-09 00:34:22.347133 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.50s
2026-03-09 00:34:22.347144 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.15s
2026-03-09 00:34:22.665489 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-09 00:34:22.665643 | orchestrator | + osism apply network
2026-03-09 00:34:34.951944 | orchestrator | 2026-03-09 00:34:34 | INFO  | Prepare task for execution of network.
2026-03-09 00:34:35.034637 | orchestrator | 2026-03-09 00:34:35 | INFO  | Task 948bdb2a-9348-43ac-a7f0-1850acae5ddc (network) was prepared for execution.
2026-03-09 00:34:35.034730 | orchestrator | 2026-03-09 00:34:35 | INFO  | It takes a moment until task 948bdb2a-9348-43ac-a7f0-1850acae5ddc (network) has been started and output is visible here.
2026-03-09 00:35:04.873804 | orchestrator |
2026-03-09 00:35:04.873915 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-09 00:35:04.873933 | orchestrator |
2026-03-09 00:35:04.873947 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-09 00:35:04.873959 | orchestrator | Monday 09 March 2026 00:34:39 +0000 (0:00:00.272) 0:00:00.272 **********
2026-03-09 00:35:04.873971 | orchestrator | ok: [testbed-manager]
2026-03-09 00:35:04.873983 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:35:04.873994 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:35:04.874005 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:35:04.874073 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:35:04.874086 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:35:04.874096 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:35:04.874107 | orchestrator |
2026-03-09 00:35:04.874118 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-09 00:35:04.874130 | orchestrator | Monday 09 March 2026 00:34:40 +0000 (0:00:00.839) 0:00:01.111 **********
2026-03-09 00:35:04.874143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:35:04.874157 | orchestrator |
2026-03-09 00:35:04.874168 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-09 00:35:04.874179 | orchestrator | Monday 09 March 2026 00:34:41 +0000 (0:00:01.256) 0:00:02.368 **********
2026-03-09 00:35:04.874190 | orchestrator | ok: [testbed-manager]
2026-03-09 00:35:04.874201 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:35:04.874212 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:35:04.874223 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:35:04.874233 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:35:04.874244 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:35:04.874280 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:35:04.874292 | orchestrator |
2026-03-09 00:35:04.874303 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-09 00:35:04.874314 | orchestrator | Monday 09 March 2026 00:34:43 +0000 (0:00:02.069) 0:00:04.438 **********
2026-03-09 00:35:04.874325 | orchestrator | ok: [testbed-manager]
2026-03-09 00:35:04.874336 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:35:04.874350 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:35:04.874364 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:35:04.874377 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:35:04.874389 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:35:04.874402 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:35:04.874416 | orchestrator |
2026-03-09 00:35:04.874429 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-09 00:35:04.874442 | orchestrator | Monday 09 March 2026 00:34:45 +0000 (0:00:01.820) 0:00:06.258 **********
2026-03-09 00:35:04.874456 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-09 00:35:04.874470 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-09 00:35:04.874483 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-09 00:35:04.874496 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-09 00:35:04.874509 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-09 00:35:04.874522 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-09 00:35:04.874533 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-09 00:35:04.874544 | orchestrator |
2026-03-09 00:35:04.874590 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-09 00:35:04.874607 | orchestrator | Monday 09 March 2026 00:34:46 +0000 (0:00:01.029) 0:00:07.287 **********
2026-03-09 00:35:04.874623 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-09 00:35:04.874639 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-09 00:35:04.874658 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-09 00:35:04.874678 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-09 00:35:04.874696 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-09 00:35:04.874712 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-09 00:35:04.874727 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-09 00:35:04.874738 | orchestrator |
2026-03-09 00:35:04.874749 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-09 00:35:04.874761 | orchestrator | Monday 09 March 2026 00:34:50 +0000 (0:00:03.458) 0:00:10.745 **********
2026-03-09 00:35:04.874771 | orchestrator | changed: [testbed-manager]
2026-03-09 00:35:04.874782 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:35:04.874793 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:35:04.874804 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:35:04.874814 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:35:04.874825 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:35:04.874835 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:35:04.874846 | orchestrator |
2026-03-09 00:35:04.874857 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-09 00:35:04.874868 | orchestrator | Monday 09 March 2026 00:34:51 +0000 (0:00:01.594) 0:00:12.340 **********
2026-03-09 00:35:04.874899 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-09 00:35:04.874910 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-09 00:35:04.874921 | orchestrator | ok: [testbed-node-3
-> localhost] 2026-03-09 00:35:04.874932 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-09 00:35:04.874942 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 00:35:04.874953 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-09 00:35:04.874964 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 00:35:04.874975 | orchestrator | 2026-03-09 00:35:04.874985 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-09 00:35:04.874996 | orchestrator | Monday 09 March 2026 00:34:53 +0000 (0:00:01.914) 0:00:14.254 ********** 2026-03-09 00:35:04.875007 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:04.875027 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:04.875037 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:04.875048 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:04.875059 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:04.875070 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:04.875080 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:04.875091 | orchestrator | 2026-03-09 00:35:04.875102 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-09 00:35:04.875131 | orchestrator | Monday 09 March 2026 00:34:54 +0000 (0:00:01.163) 0:00:15.418 ********** 2026-03-09 00:35:04.875142 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:04.875153 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:04.875164 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:04.875174 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:04.875185 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:04.875196 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:04.875206 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:04.875217 | orchestrator | 2026-03-09 00:35:04.875228 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-03-09 00:35:04.875239 | orchestrator | Monday 09 March 2026 00:34:55 +0000 (0:00:00.712) 0:00:16.130 ********** 2026-03-09 00:35:04.875250 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:04.875268 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:04.875285 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:04.875302 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:04.875320 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:04.875337 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:04.875353 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:04.875368 | orchestrator | 2026-03-09 00:35:04.875387 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-09 00:35:04.875405 | orchestrator | Monday 09 March 2026 00:34:57 +0000 (0:00:02.333) 0:00:18.463 ********** 2026-03-09 00:35:04.875424 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:04.875443 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:04.875462 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:04.875476 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:04.875487 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:04.875498 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:04.875509 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-09 00:35:04.875521 | orchestrator | 2026-03-09 00:35:04.875532 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-09 00:35:04.875543 | orchestrator | Monday 09 March 2026 00:34:58 +0000 (0:00:00.940) 0:00:19.403 ********** 2026-03-09 00:35:04.875580 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:04.875592 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:35:04.875603 | orchestrator | changed: [testbed-node-0] 2026-03-09 
00:35:04.875613 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:35:04.875624 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:35:04.875635 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:35:04.875645 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:35:04.875656 | orchestrator | 2026-03-09 00:35:04.875667 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-09 00:35:04.875677 | orchestrator | Monday 09 March 2026 00:35:00 +0000 (0:00:01.715) 0:00:21.119 ********** 2026-03-09 00:35:04.875689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:35:04.875702 | orchestrator | 2026-03-09 00:35:04.875713 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-09 00:35:04.875724 | orchestrator | Monday 09 March 2026 00:35:01 +0000 (0:00:01.288) 0:00:22.407 ********** 2026-03-09 00:35:04.875745 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:04.875756 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:04.875767 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:04.875777 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:04.875788 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:04.875799 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:04.875809 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:04.875820 | orchestrator | 2026-03-09 00:35:04.875831 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-09 00:35:04.875842 | orchestrator | Monday 09 March 2026 00:35:02 +0000 (0:00:00.980) 0:00:23.388 ********** 2026-03-09 00:35:04.875853 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:04.875864 | orchestrator | ok: [testbed-node-0] 2026-03-09 
00:35:04.875874 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:04.875885 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:04.875895 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:04.875906 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:04.875917 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:04.875927 | orchestrator | 2026-03-09 00:35:04.875946 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-09 00:35:04.875958 | orchestrator | Monday 09 March 2026 00:35:03 +0000 (0:00:00.830) 0:00:24.219 ********** 2026-03-09 00:35:04.875969 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:35:04.875980 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:35:04.875991 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:35:04.876001 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:35:04.876013 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:35:04.876023 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:35:04.876034 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:35:04.876045 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:35:04.876056 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:35:04.876066 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:35:04.876077 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:35:04.876088 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-09 00:35:04.876098 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:35:04.876109 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-09 00:35:04.876120 | orchestrator | 2026-03-09 00:35:04.876141 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-09 00:35:20.559889 | orchestrator | Monday 09 March 2026 00:35:04 +0000 (0:00:01.294) 0:00:25.513 ********** 2026-03-09 00:35:20.559981 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:20.559993 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:20.560001 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:20.560008 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:20.560015 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:20.560021 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:20.560027 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:20.560033 | orchestrator | 2026-03-09 00:35:20.560040 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-09 00:35:20.560048 | orchestrator | Monday 09 March 2026 00:35:05 +0000 (0:00:00.607) 0:00:26.121 ********** 2026-03-09 00:35:20.560057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-3, testbed-node-2, testbed-manager, testbed-node-0, testbed-node-4, testbed-node-5 2026-03-09 00:35:20.560087 | orchestrator | 2026-03-09 00:35:20.560094 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-09 00:35:20.560101 | orchestrator | Monday 09 March 2026 00:35:10 +0000 (0:00:04.697) 0:00:30.818 ********** 2026-03-09 00:35:20.560110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560117 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560138 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:20.560145 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:20.560172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': 
{'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:20.560197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:20.560219 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:20.560225 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:20.560238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 
'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:20.560244 | orchestrator | 2026-03-09 00:35:20.560251 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-09 00:35:20.560257 | orchestrator | Monday 09 March 2026 00:35:15 +0000 (0:00:05.300) 0:00:36.119 ********** 2026-03-09 00:35:20.560264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560271 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560277 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560297 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-09 00:35:20.560313 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:20.560320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:20.560327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:20.560333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:20.560339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:20.560358 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:34.067047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-09 00:35:34.067159 | orchestrator | 2026-03-09 00:35:34.067177 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-09 00:35:34.067191 | orchestrator | Monday 09 March 2026 00:35:20 +0000 (0:00:05.271) 0:00:41.390 ********** 2026-03-09 00:35:34.067204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:35:34.067216 | orchestrator | 2026-03-09 00:35:34.067227 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-09 00:35:34.067239 | orchestrator | Monday 09 March 2026 00:35:22 +0000 (0:00:01.333) 0:00:42.724 ********** 2026-03-09 00:35:34.067250 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:34.067263 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:34.067274 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:34.067285 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:34.067296 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:34.067306 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:34.067317 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:34.067328 | orchestrator | 2026-03-09 00:35:34.067339 | orchestrator | TASK [osism.commons.network : Remove unused configuration 
files] *************** 2026-03-09 00:35:34.067350 | orchestrator | Monday 09 March 2026 00:35:23 +0000 (0:00:01.203) 0:00:43.928 ********** 2026-03-09 00:35:34.067361 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:35:34.067373 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 00:35:34.067384 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:35:34.067395 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:35:34.067405 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:34.067417 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:35:34.067428 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 00:35:34.067439 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:35:34.067450 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:35:34.067460 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:35:34.067471 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 00:35:34.067482 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:35:34.067493 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:35:34.067504 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:34.067515 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:35:34.067526 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 
00:35:34.067536 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:35:34.067592 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:35:34.067630 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:34.067644 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:35:34.067657 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 00:35:34.067668 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:35:34.067679 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:35:34.067690 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:34.067701 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:35:34.067712 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 00:35:34.067723 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:35:34.067734 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:35:34.067745 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:34.067755 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:34.067766 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-09 00:35:34.067777 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-09 00:35:34.067788 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-09 00:35:34.067798 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-09 00:35:34.067809 | 
orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:34.067820 | orchestrator | 2026-03-09 00:35:34.067831 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-03-09 00:35:34.067860 | orchestrator | Monday 09 March 2026 00:35:24 +0000 (0:00:00.926) 0:00:44.854 ********** 2026-03-09 00:35:34.067876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:35:34.067896 | orchestrator | 2026-03-09 00:35:34.067914 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-03-09 00:35:34.067931 | orchestrator | Monday 09 March 2026 00:35:25 +0000 (0:00:01.275) 0:00:46.130 ********** 2026-03-09 00:35:34.067951 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:34.067971 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:34.067987 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:34.067997 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:34.068008 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:34.068019 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:34.068029 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:34.068040 | orchestrator | 2026-03-09 00:35:34.068051 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-03-09 00:35:34.068062 | orchestrator | Monday 09 March 2026 00:35:26 +0000 (0:00:00.628) 0:00:46.758 ********** 2026-03-09 00:35:34.068072 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:34.068083 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:34.068094 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:34.068104 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:34.068115 | 
orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:34.068126 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:34.068136 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:34.068147 | orchestrator | 2026-03-09 00:35:34.068157 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-03-09 00:35:34.068168 | orchestrator | Monday 09 March 2026 00:35:26 +0000 (0:00:00.842) 0:00:47.601 ********** 2026-03-09 00:35:34.068180 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:34.068191 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:34.068201 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:34.068223 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:34.068234 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:34.068244 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:34.068255 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:34.068266 | orchestrator | 2026-03-09 00:35:34.068277 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-03-09 00:35:34.068287 | orchestrator | Monday 09 March 2026 00:35:27 +0000 (0:00:00.645) 0:00:48.246 ********** 2026-03-09 00:35:34.068298 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:34.068309 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:34.068320 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:34.068331 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:34.068342 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:34.068352 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:34.068363 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:34.068374 | orchestrator | 2026-03-09 00:35:34.068385 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-03-09 00:35:34.068396 | orchestrator | Monday 09 March 2026 00:35:29 +0000 (0:00:01.759) 0:00:50.006 ********** 
2026-03-09 00:35:34.068407 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:34.068418 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:34.068428 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:34.068439 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:34.068450 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:34.068460 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:34.068471 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:34.068481 | orchestrator | 2026-03-09 00:35:34.068493 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-03-09 00:35:34.068503 | orchestrator | Monday 09 March 2026 00:35:30 +0000 (0:00:01.019) 0:00:51.025 ********** 2026-03-09 00:35:34.068514 | orchestrator | ok: [testbed-manager] 2026-03-09 00:35:34.068525 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:35:34.068535 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:35:34.068546 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:35:34.068589 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:35:34.068601 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:35:34.068611 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:35:34.068622 | orchestrator | 2026-03-09 00:35:34.068632 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-03-09 00:35:34.068643 | orchestrator | Monday 09 March 2026 00:35:32 +0000 (0:00:02.324) 0:00:53.350 ********** 2026-03-09 00:35:34.068654 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:34.068665 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:34.068675 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:34.068686 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:34.068697 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:34.068707 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:34.068718 | orchestrator | skipping: [testbed-node-5] 2026-03-09 
00:35:34.068729 | orchestrator | 2026-03-09 00:35:34.068740 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-03-09 00:35:34.068751 | orchestrator | Monday 09 March 2026 00:35:33 +0000 (0:00:00.829) 0:00:54.179 ********** 2026-03-09 00:35:34.068762 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:35:34.068772 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:35:34.068783 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:35:34.068794 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:35:34.068804 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:35:34.068815 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:35:34.068825 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:35:34.068841 | orchestrator | 2026-03-09 00:35:34.068862 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:35:34.068881 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-09 00:35:34.068902 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 00:35:34.068923 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 00:35:34.413911 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 00:35:34.413982 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 00:35:34.413987 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 00:35:34.413992 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 00:35:34.413996 | orchestrator | 2026-03-09 00:35:34.414000 | orchestrator | 2026-03-09 00:35:34.414004 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:35:34.414009 | orchestrator | Monday 09 March 2026 00:35:34 +0000 (0:00:00.526) 0:00:54.706 ********** 2026-03-09 00:35:34.414013 | orchestrator | =============================================================================== 2026-03-09 00:35:34.414055 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.30s 2026-03-09 00:35:34.414059 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.27s 2026-03-09 00:35:34.414063 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.70s 2026-03-09 00:35:34.414067 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.46s 2026-03-09 00:35:34.414070 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.33s 2026-03-09 00:35:34.414074 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.32s 2026-03-09 00:35:34.414078 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.07s 2026-03-09 00:35:34.414081 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.91s 2026-03-09 00:35:34.414085 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.82s 2026-03-09 00:35:34.414089 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.76s 2026-03-09 00:35:34.414092 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s 2026-03-09 00:35:34.414096 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.59s 2026-03-09 00:35:34.414100 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.33s 2026-03-09 00:35:34.414104 | orchestrator | 
osism.commons.network : Remove unused configuration files --------------- 1.29s 2026-03-09 00:35:34.414108 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.29s 2026-03-09 00:35:34.414111 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.28s 2026-03-09 00:35:34.414115 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.26s 2026-03-09 00:35:34.414119 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.20s 2026-03-09 00:35:34.414123 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.16s 2026-03-09 00:35:34.414126 | orchestrator | osism.commons.network : Create required directories --------------------- 1.03s 2026-03-09 00:35:34.718780 | orchestrator | + osism apply wireguard 2026-03-09 00:35:46.885201 | orchestrator | 2026-03-09 00:35:46 | INFO  | Prepare task for execution of wireguard. 2026-03-09 00:35:46.956005 | orchestrator | 2026-03-09 00:35:46 | INFO  | Task baad3489-9280-42b2-8eec-edb1baac1d49 (wireguard) was prepared for execution. 2026-03-09 00:35:46.956157 | orchestrator | 2026-03-09 00:35:46 | INFO  | It takes a moment until task baad3489-9280-42b2-8eec-edb1baac1d49 (wireguard) has been started and output is visible here. 
2026-03-09 00:36:06.547460 | orchestrator | 2026-03-09 00:36:06.547634 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-09 00:36:06.547655 | orchestrator | 2026-03-09 00:36:06.547668 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-09 00:36:06.547680 | orchestrator | Monday 09 March 2026 00:35:51 +0000 (0:00:00.215) 0:00:00.215 ********** 2026-03-09 00:36:06.547691 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:06.547703 | orchestrator | 2026-03-09 00:36:06.547714 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-09 00:36:06.547725 | orchestrator | Monday 09 March 2026 00:35:52 +0000 (0:00:01.521) 0:00:01.737 ********** 2026-03-09 00:36:06.547736 | orchestrator | changed: [testbed-manager] 2026-03-09 00:36:06.547748 | orchestrator | 2026-03-09 00:36:06.547759 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-09 00:36:06.547770 | orchestrator | Monday 09 March 2026 00:35:59 +0000 (0:00:06.335) 0:00:08.072 ********** 2026-03-09 00:36:06.547781 | orchestrator | changed: [testbed-manager] 2026-03-09 00:36:06.547792 | orchestrator | 2026-03-09 00:36:06.547802 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-09 00:36:06.547813 | orchestrator | Monday 09 March 2026 00:35:59 +0000 (0:00:00.533) 0:00:08.605 ********** 2026-03-09 00:36:06.547824 | orchestrator | changed: [testbed-manager] 2026-03-09 00:36:06.547835 | orchestrator | 2026-03-09 00:36:06.547845 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-09 00:36:06.547856 | orchestrator | Monday 09 March 2026 00:36:00 +0000 (0:00:00.437) 0:00:09.043 ********** 2026-03-09 00:36:06.547867 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:06.547878 | orchestrator | 2026-03-09 
00:36:06.547889 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-09 00:36:06.547899 | orchestrator | Monday 09 March 2026 00:36:00 +0000 (0:00:00.671) 0:00:09.715 ********** 2026-03-09 00:36:06.547910 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:06.547921 | orchestrator | 2026-03-09 00:36:06.547932 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-09 00:36:06.547943 | orchestrator | Monday 09 March 2026 00:36:01 +0000 (0:00:00.459) 0:00:10.175 ********** 2026-03-09 00:36:06.547953 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:06.547964 | orchestrator | 2026-03-09 00:36:06.547975 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-09 00:36:06.547986 | orchestrator | Monday 09 March 2026 00:36:01 +0000 (0:00:00.418) 0:00:10.593 ********** 2026-03-09 00:36:06.547997 | orchestrator | changed: [testbed-manager] 2026-03-09 00:36:06.548010 | orchestrator | 2026-03-09 00:36:06.548023 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-03-09 00:36:06.548037 | orchestrator | Monday 09 March 2026 00:36:02 +0000 (0:00:01.117) 0:00:11.711 ********** 2026-03-09 00:36:06.548049 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-09 00:36:06.548062 | orchestrator | changed: [testbed-manager] 2026-03-09 00:36:06.548075 | orchestrator | 2026-03-09 00:36:06.548088 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-09 00:36:06.548101 | orchestrator | Monday 09 March 2026 00:36:03 +0000 (0:00:00.920) 0:00:12.632 ********** 2026-03-09 00:36:06.548114 | orchestrator | changed: [testbed-manager] 2026-03-09 00:36:06.548127 | orchestrator | 2026-03-09 00:36:06.548144 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-09 
00:36:06.548189 | orchestrator | Monday 09 March 2026 00:36:05 +0000 (0:00:01.621) 0:00:14.253 ********** 2026-03-09 00:36:06.548209 | orchestrator | changed: [testbed-manager] 2026-03-09 00:36:06.548228 | orchestrator | 2026-03-09 00:36:06.548249 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:36:06.548268 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:36:06.548314 | orchestrator | 2026-03-09 00:36:06.548329 | orchestrator | 2026-03-09 00:36:06.548346 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:36:06.548365 | orchestrator | Monday 09 March 2026 00:36:06 +0000 (0:00:00.912) 0:00:15.166 ********** 2026-03-09 00:36:06.548383 | orchestrator | =============================================================================== 2026-03-09 00:36:06.548400 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.34s 2026-03-09 00:36:06.548418 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.62s 2026-03-09 00:36:06.548433 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.52s 2026-03-09 00:36:06.548451 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.12s 2026-03-09 00:36:06.548469 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s 2026-03-09 00:36:06.548488 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.91s 2026-03-09 00:36:06.548507 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.67s 2026-03-09 00:36:06.548525 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s 2026-03-09 00:36:06.548542 | orchestrator | osism.services.wireguard : Get 
public key - server ---------------------- 0.46s 2026-03-09 00:36:06.548603 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2026-03-09 00:36:06.548616 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2026-03-09 00:36:06.862395 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-09 00:36:06.901862 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-09 00:36:06.901954 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-09 00:36:06.981872 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 187 0 --:--:-- --:--:-- --:--:-- 189 2026-03-09 00:36:06.994909 | orchestrator | + osism apply --environment custom workarounds 2026-03-09 00:36:09.019741 | orchestrator | 2026-03-09 00:36:09 | INFO  | Trying to run play workarounds in environment custom 2026-03-09 00:36:19.098857 | orchestrator | 2026-03-09 00:36:19 | INFO  | Prepare task for execution of workarounds. 2026-03-09 00:36:19.179510 | orchestrator | 2026-03-09 00:36:19 | INFO  | Task 7b5d9ef0-7522-47a5-8c9f-f4ca1f411f6c (workarounds) was prepared for execution. 2026-03-09 00:36:19.179659 | orchestrator | 2026-03-09 00:36:19 | INFO  | It takes a moment until task 7b5d9ef0-7522-47a5-8c9f-f4ca1f411f6c (workarounds) has been started and output is visible here. 
2026-03-09 00:36:43.977049 | orchestrator | 2026-03-09 00:36:43.977163 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:36:43.977180 | orchestrator | 2026-03-09 00:36:43.977192 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-03-09 00:36:43.977204 | orchestrator | Monday 09 March 2026 00:36:23 +0000 (0:00:00.132) 0:00:00.132 ********** 2026-03-09 00:36:43.977216 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-03-09 00:36:43.977228 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-03-09 00:36:43.977239 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-03-09 00:36:43.977249 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-03-09 00:36:43.977261 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-03-09 00:36:43.977271 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-03-09 00:36:43.977283 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-03-09 00:36:43.977294 | orchestrator | 2026-03-09 00:36:43.977305 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-03-09 00:36:43.977337 | orchestrator | 2026-03-09 00:36:43.977349 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-09 00:36:43.977360 | orchestrator | Monday 09 March 2026 00:36:24 +0000 (0:00:00.803) 0:00:00.935 ********** 2026-03-09 00:36:43.977371 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:43.977383 | orchestrator | 2026-03-09 00:36:43.977393 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-03-09 00:36:43.977404 | orchestrator | 2026-03-09 00:36:43.977415 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-03-09 00:36:43.977426 | orchestrator | Monday 09 March 2026 00:36:26 +0000 (0:00:02.026) 0:00:02.962 ********** 2026-03-09 00:36:43.977437 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:36:43.977448 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:36:43.977458 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:36:43.977469 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:36:43.977479 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:36:43.977490 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:36:43.977500 | orchestrator | 2026-03-09 00:36:43.977511 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-03-09 00:36:43.977522 | orchestrator | 2026-03-09 00:36:43.977533 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-03-09 00:36:43.977544 | orchestrator | Monday 09 March 2026 00:36:27 +0000 (0:00:01.717) 0:00:04.679 ********** 2026-03-09 00:36:43.977624 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-09 00:36:43.977641 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-09 00:36:43.977654 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-09 00:36:43.977666 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-09 00:36:43.977679 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-09 00:36:43.977692 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-09 00:36:43.977704 | orchestrator | 2026-03-09 00:36:43.977716 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-03-09 00:36:43.977729 | orchestrator | Monday 09 March 2026 00:36:29 +0000 (0:00:01.289) 0:00:05.969 ********** 2026-03-09 00:36:43.977743 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:36:43.977755 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:36:43.977768 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:36:43.977780 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:36:43.977792 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:36:43.977804 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:36:43.977816 | orchestrator | 2026-03-09 00:36:43.977829 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-03-09 00:36:43.977841 | orchestrator | Monday 09 March 2026 00:36:33 +0000 (0:00:03.928) 0:00:09.897 ********** 2026-03-09 00:36:43.977854 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:36:43.977866 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:36:43.977879 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:36:43.977891 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:36:43.977903 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:36:43.977917 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:36:43.977929 | orchestrator | 2026-03-09 00:36:43.977951 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-03-09 00:36:43.977963 | orchestrator | 2026-03-09 00:36:43.977974 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-03-09 00:36:43.977985 | orchestrator | Monday 09 March 2026 00:36:33 +0000 (0:00:00.652) 0:00:10.550 ********** 2026-03-09 00:36:43.978004 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:36:43.978078 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:36:43.978093 | orchestrator | changed: [testbed-node-1] 2026-03-09 
00:36:43.978104 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:36:43.978115 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:36:43.978126 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:36:43.978136 | orchestrator | changed: [testbed-manager] 2026-03-09 00:36:43.978147 | orchestrator | 2026-03-09 00:36:43.978158 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-03-09 00:36:43.978169 | orchestrator | Monday 09 March 2026 00:36:35 +0000 (0:00:01.570) 0:00:12.120 ********** 2026-03-09 00:36:43.978180 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:36:43.978191 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:36:43.978201 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:36:43.978212 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:36:43.978223 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:36:43.978234 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:36:43.978263 | orchestrator | changed: [testbed-manager] 2026-03-09 00:36:43.978275 | orchestrator | 2026-03-09 00:36:43.978286 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-03-09 00:36:43.978297 | orchestrator | Monday 09 March 2026 00:36:36 +0000 (0:00:01.516) 0:00:13.637 ********** 2026-03-09 00:36:43.978308 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:36:43.978319 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:36:43.978330 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:36:43.978341 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:36:43.978352 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:36:43.978363 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:36:43.978373 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:43.978384 | orchestrator | 2026-03-09 00:36:43.978395 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-03-09 00:36:43.978406 | orchestrator 
| Monday 09 March 2026 00:36:38 +0000 (0:00:01.566) 0:00:15.204 ********** 2026-03-09 00:36:43.978417 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:36:43.978428 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:36:43.978439 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:36:43.978450 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:36:43.978461 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:36:43.978471 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:36:43.978482 | orchestrator | changed: [testbed-manager] 2026-03-09 00:36:43.978493 | orchestrator | 2026-03-09 00:36:43.978504 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-03-09 00:36:43.978515 | orchestrator | Monday 09 March 2026 00:36:40 +0000 (0:00:01.906) 0:00:17.111 ********** 2026-03-09 00:36:43.978526 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:36:43.978537 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:36:43.978547 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:36:43.978603 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:36:43.978615 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:36:43.978626 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:36:43.978636 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:36:43.978647 | orchestrator | 2026-03-09 00:36:43.978658 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-03-09 00:36:43.978669 | orchestrator | 2026-03-09 00:36:43.978680 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-03-09 00:36:43.978691 | orchestrator | Monday 09 March 2026 00:36:41 +0000 (0:00:00.617) 0:00:17.728 ********** 2026-03-09 00:36:43.978702 | orchestrator | ok: [testbed-manager] 2026-03-09 00:36:43.978713 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:36:43.978724 | orchestrator | ok: 
[testbed-node-1] 2026-03-09 00:36:43.978734 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:36:43.978745 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:36:43.978756 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:36:43.978766 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:36:43.978787 | orchestrator | 2026-03-09 00:36:43.978798 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:36:43.978810 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:36:43.978822 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:36:43.978834 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:36:43.978845 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:36:43.978856 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:36:43.978867 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:36:43.978878 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:36:43.978889 | orchestrator | 2026-03-09 00:36:43.978900 | orchestrator | 2026-03-09 00:36:43.978911 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:36:43.978923 | orchestrator | Monday 09 March 2026 00:36:43 +0000 (0:00:02.934) 0:00:20.662 ********** 2026-03-09 00:36:43.978934 | orchestrator | =============================================================================== 2026-03-09 00:36:43.978950 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.93s 2026-03-09 00:36:43.978961 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.93s 2026-03-09 00:36:43.978972 | orchestrator | Apply netplan configuration --------------------------------------------- 2.03s 2026-03-09 00:36:43.978983 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.91s 2026-03-09 00:36:43.978994 | orchestrator | Apply netplan configuration --------------------------------------------- 1.72s 2026-03-09 00:36:43.979005 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.57s 2026-03-09 00:36:43.979100 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.57s 2026-03-09 00:36:43.979115 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.52s 2026-03-09 00:36:43.979126 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.29s 2026-03-09 00:36:43.979136 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.80s 2026-03-09 00:36:43.979148 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s 2026-03-09 00:36:43.979167 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s 2026-03-09 00:36:44.499018 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-03-09 00:36:56.571051 | orchestrator | 2026-03-09 00:36:56 | INFO  | Prepare task for execution of reboot. 2026-03-09 00:36:56.641645 | orchestrator | 2026-03-09 00:36:56 | INFO  | Task f2cd9d3b-ee0d-4edd-94f1-d1de5395588e (reboot) was prepared for execution. 2026-03-09 00:36:56.641744 | orchestrator | 2026-03-09 00:36:56 | INFO  | It takes a moment until task f2cd9d3b-ee0d-4edd-94f1-d1de5395588e (reboot) has been started and output is visible here. 
2026-03-09 00:37:06.754200 | orchestrator | 2026-03-09 00:37:06.754335 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-09 00:37:06.754353 | orchestrator | 2026-03-09 00:37:06.754365 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-09 00:37:06.754377 | orchestrator | Monday 09 March 2026 00:37:00 +0000 (0:00:00.201) 0:00:00.201 ********** 2026-03-09 00:37:06.754409 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:37:06.754422 | orchestrator | 2026-03-09 00:37:06.754433 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-09 00:37:06.754444 | orchestrator | Monday 09 March 2026 00:37:00 +0000 (0:00:00.097) 0:00:00.299 ********** 2026-03-09 00:37:06.754455 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:37:06.754466 | orchestrator | 2026-03-09 00:37:06.754477 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-09 00:37:06.754488 | orchestrator | Monday 09 March 2026 00:37:01 +0000 (0:00:01.007) 0:00:01.306 ********** 2026-03-09 00:37:06.754498 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:37:06.754509 | orchestrator | 2026-03-09 00:37:06.754520 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-09 00:37:06.754531 | orchestrator | 2026-03-09 00:37:06.754542 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-09 00:37:06.754553 | orchestrator | Monday 09 March 2026 00:37:01 +0000 (0:00:00.111) 0:00:01.417 ********** 2026-03-09 00:37:06.754643 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:37:06.754654 | orchestrator | 2026-03-09 00:37:06.754665 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-09 00:37:06.754676 | orchestrator | Monday 09 March 2026 
00:37:02 +0000 (0:00:00.094) 0:00:01.512 ********** 2026-03-09 00:37:06.754687 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:37:06.754698 | orchestrator | 2026-03-09 00:37:06.754709 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-09 00:37:06.754719 | orchestrator | Monday 09 March 2026 00:37:02 +0000 (0:00:00.671) 0:00:02.183 ********** 2026-03-09 00:37:06.754731 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:37:06.754742 | orchestrator | 2026-03-09 00:37:06.754752 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-09 00:37:06.754764 | orchestrator | 2026-03-09 00:37:06.754774 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-09 00:37:06.754785 | orchestrator | Monday 09 March 2026 00:37:02 +0000 (0:00:00.157) 0:00:02.341 ********** 2026-03-09 00:37:06.754796 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:37:06.754806 | orchestrator | 2026-03-09 00:37:06.754817 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-09 00:37:06.754828 | orchestrator | Monday 09 March 2026 00:37:03 +0000 (0:00:00.199) 0:00:02.540 ********** 2026-03-09 00:37:06.754843 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:37:06.754862 | orchestrator | 2026-03-09 00:37:06.754881 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-09 00:37:06.754900 | orchestrator | Monday 09 March 2026 00:37:03 +0000 (0:00:00.671) 0:00:03.212 ********** 2026-03-09 00:37:06.754919 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:37:06.754935 | orchestrator | 2026-03-09 00:37:06.754947 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-09 00:37:06.754957 | orchestrator | 2026-03-09 00:37:06.754968 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-03-09 00:37:06.754980 | orchestrator | Monday 09 March 2026 00:37:03 +0000 (0:00:00.105) 0:00:03.318 ********** 2026-03-09 00:37:06.754999 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:37:06.755018 | orchestrator | 2026-03-09 00:37:06.755036 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-09 00:37:06.755055 | orchestrator | Monday 09 March 2026 00:37:03 +0000 (0:00:00.097) 0:00:03.416 ********** 2026-03-09 00:37:06.755074 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:37:06.755101 | orchestrator | 2026-03-09 00:37:06.755123 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-09 00:37:06.755158 | orchestrator | Monday 09 March 2026 00:37:04 +0000 (0:00:00.662) 0:00:04.078 ********** 2026-03-09 00:37:06.755176 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:37:06.755195 | orchestrator | 2026-03-09 00:37:06.755214 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-09 00:37:06.755246 | orchestrator | 2026-03-09 00:37:06.755266 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-09 00:37:06.755278 | orchestrator | Monday 09 March 2026 00:37:04 +0000 (0:00:00.119) 0:00:04.198 ********** 2026-03-09 00:37:06.755289 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:37:06.755300 | orchestrator | 2026-03-09 00:37:06.755311 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-09 00:37:06.755321 | orchestrator | Monday 09 March 2026 00:37:04 +0000 (0:00:00.088) 0:00:04.287 ********** 2026-03-09 00:37:06.755332 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:37:06.755343 | orchestrator | 2026-03-09 00:37:06.755354 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-09 00:37:06.755365 | orchestrator | Monday 09 March 2026 00:37:05 +0000 (0:00:00.655) 0:00:04.943 ********** 2026-03-09 00:37:06.755375 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:37:06.755386 | orchestrator | 2026-03-09 00:37:06.755397 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-09 00:37:06.755408 | orchestrator | 2026-03-09 00:37:06.755419 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-09 00:37:06.755429 | orchestrator | Monday 09 March 2026 00:37:05 +0000 (0:00:00.107) 0:00:05.051 ********** 2026-03-09 00:37:06.755440 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:37:06.755451 | orchestrator | 2026-03-09 00:37:06.755462 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-09 00:37:06.755473 | orchestrator | Monday 09 March 2026 00:37:05 +0000 (0:00:00.097) 0:00:05.148 ********** 2026-03-09 00:37:06.755483 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:37:06.755494 | orchestrator | 2026-03-09 00:37:06.755505 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-09 00:37:06.755516 | orchestrator | Monday 09 March 2026 00:37:06 +0000 (0:00:00.673) 0:00:05.821 ********** 2026-03-09 00:37:06.755546 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:37:06.755587 | orchestrator | 2026-03-09 00:37:06.755600 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:37:06.755612 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:37:06.755624 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:37:06.755635 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-09 00:37:06.755645 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:37:06.755656 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:37:06.755667 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:37:06.755677 | orchestrator | 2026-03-09 00:37:06.755688 | orchestrator | 2026-03-09 00:37:06.755699 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:37:06.755710 | orchestrator | Monday 09 March 2026 00:37:06 +0000 (0:00:00.043) 0:00:05.864 ********** 2026-03-09 00:37:06.755720 | orchestrator | =============================================================================== 2026-03-09 00:37:06.755734 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.34s 2026-03-09 00:37:06.755752 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.68s 2026-03-09 00:37:06.755771 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s 2026-03-09 00:37:07.044508 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-09 00:37:19.265305 | orchestrator | 2026-03-09 00:37:19 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-09 00:37:19.333695 | orchestrator | 2026-03-09 00:37:19 | INFO  | Task 8d7479d6-75b8-4884-9d78-0a931fefd4c0 (wait-for-connection) was prepared for execution. 2026-03-09 00:37:19.333808 | orchestrator | 2026-03-09 00:37:19 | INFO  | It takes a moment until task 8d7479d6-75b8-4884-9d78-0a931fefd4c0 (wait-for-connection) has been started and output is visible here. 
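The reboot sequence above (a guard task that skips when `ireallymeanit=yes` is passed, a fire-and-forget reboot task, and a separate wait-for-connection play) matches a common Ansible pattern. A minimal sketch of that pattern, not the testbed's actual playbook — the host group and timeout are illustrative:

```yaml
# Sketch of the reboot-then-reconnect pattern seen in the log above.
# The "do not wait" task fires the reboot asynchronously and moves on;
# a separate play then polls until SSH is reachable again.
- name: Reboot systems
  hosts: testbed-nodes
  serial: 1                        # one node at a time, as in the log
  tasks:
    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "Pass -e ireallymeanit=yes to really reboot."
      when: ireallymeanit != 'yes'

    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.shell: sleep 2 && shutdown -r now
      async: 1                     # fire and forget
      poll: 0
      become: true

- name: Wait until remote systems are reachable
  hosts: testbed-nodes
  gather_facts: false
  tasks:
    - name: Wait until remote system is reachable
      ansible.builtin.wait_for_connection:
        timeout: 600
```

Splitting the reboot and the reconnect into separate plays is what lets the log's `osism apply wait-for-connection` run as its own step after all nodes have been kicked.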
2026-03-09 00:37:34.994282 | orchestrator | 2026-03-09 00:37:34.994412 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-09 00:37:34.994430 | orchestrator | 2026-03-09 00:37:34.994443 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-09 00:37:34.994454 | orchestrator | Monday 09 March 2026 00:37:23 +0000 (0:00:00.166) 0:00:00.166 ********** 2026-03-09 00:37:34.994465 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:37:34.994477 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:37:34.994489 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:37:34.994500 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:37:34.994511 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:37:34.994522 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:37:34.994533 | orchestrator | 2026-03-09 00:37:34.994544 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:37:34.994601 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:37:34.994616 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:37:34.994628 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:37:34.994638 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:37:34.994649 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:37:34.994660 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:37:34.994671 | orchestrator | 2026-03-09 00:37:34.994682 | orchestrator | 2026-03-09 00:37:34.994693 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-09 00:37:34.994704 | orchestrator | Monday 09 March 2026 00:37:34 +0000 (0:00:11.460) 0:00:11.627 ********** 2026-03-09 00:37:34.994714 | orchestrator | =============================================================================== 2026-03-09 00:37:34.994725 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.46s 2026-03-09 00:37:35.278738 | orchestrator | + osism apply hddtemp 2026-03-09 00:37:47.250871 | orchestrator | 2026-03-09 00:37:47 | INFO  | Prepare task for execution of hddtemp. 2026-03-09 00:37:47.332867 | orchestrator | 2026-03-09 00:37:47 | INFO  | Task 893d10d2-1ea4-4d55-9d88-40bfeaa5ed52 (hddtemp) was prepared for execution. 2026-03-09 00:37:47.334357 | orchestrator | 2026-03-09 00:37:47 | INFO  | It takes a moment until task 893d10d2-1ea4-4d55-9d88-40bfeaa5ed52 (hddtemp) has been started and output is visible here. 2026-03-09 00:38:16.088043 | orchestrator | 2026-03-09 00:38:16.088125 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-09 00:38:16.088133 | orchestrator | 2026-03-09 00:38:16.088138 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-09 00:38:16.088144 | orchestrator | Monday 09 March 2026 00:37:51 +0000 (0:00:00.248) 0:00:00.248 ********** 2026-03-09 00:38:16.088149 | orchestrator | ok: [testbed-manager] 2026-03-09 00:38:16.088155 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:38:16.088176 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:38:16.088181 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:38:16.088186 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:38:16.088192 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:38:16.088197 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:38:16.088201 | orchestrator | 2026-03-09 00:38:16.088207 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-09 00:38:16.088211 | orchestrator | Monday 09 March 2026 00:37:52 +0000 (0:00:00.700) 0:00:00.949 ********** 2026-03-09 00:38:16.088218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:38:16.088225 | orchestrator | 2026-03-09 00:38:16.088230 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-09 00:38:16.088236 | orchestrator | Monday 09 March 2026 00:37:53 +0000 (0:00:01.182) 0:00:02.131 ********** 2026-03-09 00:38:16.088243 | orchestrator | ok: [testbed-manager] 2026-03-09 00:38:16.088251 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:38:16.088258 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:38:16.088266 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:38:16.088273 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:38:16.088280 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:38:16.088287 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:38:16.088294 | orchestrator | 2026-03-09 00:38:16.088301 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-09 00:38:16.088308 | orchestrator | Monday 09 March 2026 00:37:55 +0000 (0:00:02.032) 0:00:04.164 ********** 2026-03-09 00:38:16.088315 | orchestrator | changed: [testbed-manager] 2026-03-09 00:38:16.088323 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:38:16.088330 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:38:16.088338 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:38:16.088345 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:38:16.088353 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:38:16.088360 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:38:16.088367 | 
orchestrator | 2026-03-09 00:38:16.088376 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-09 00:38:16.088381 | orchestrator | Monday 09 March 2026 00:37:56 +0000 (0:00:01.223) 0:00:05.388 ********** 2026-03-09 00:38:16.088385 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:38:16.088390 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:38:16.088394 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:38:16.088398 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:38:16.088403 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:38:16.088407 | orchestrator | ok: [testbed-manager] 2026-03-09 00:38:16.088412 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:38:16.088417 | orchestrator | 2026-03-09 00:38:16.088421 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-09 00:38:16.088426 | orchestrator | Monday 09 March 2026 00:37:57 +0000 (0:00:01.165) 0:00:06.553 ********** 2026-03-09 00:38:16.088431 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:38:16.088435 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:38:16.088440 | orchestrator | changed: [testbed-manager] 2026-03-09 00:38:16.088445 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:38:16.088449 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:38:16.088453 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:38:16.088458 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:38:16.088462 | orchestrator | 2026-03-09 00:38:16.088477 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-09 00:38:16.088482 | orchestrator | Monday 09 March 2026 00:37:58 +0000 (0:00:00.840) 0:00:07.394 ********** 2026-03-09 00:38:16.088487 | orchestrator | changed: [testbed-manager] 2026-03-09 00:38:16.088491 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:38:16.088495 | orchestrator | changed: [testbed-node-2] 
2026-03-09 00:38:16.088507 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:38:16.088512 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:38:16.088516 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:38:16.088521 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:38:16.088525 | orchestrator | 2026-03-09 00:38:16.088530 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-09 00:38:16.088534 | orchestrator | Monday 09 March 2026 00:38:12 +0000 (0:00:13.928) 0:00:21.322 ********** 2026-03-09 00:38:16.088539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:38:16.088580 | orchestrator | 2026-03-09 00:38:16.088586 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-09 00:38:16.088590 | orchestrator | Monday 09 March 2026 00:38:13 +0000 (0:00:01.189) 0:00:22.511 ********** 2026-03-09 00:38:16.088595 | orchestrator | changed: [testbed-manager] 2026-03-09 00:38:16.088599 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:38:16.088604 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:38:16.088608 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:38:16.088613 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:38:16.088618 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:38:16.088623 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:38:16.088629 | orchestrator | 2026-03-09 00:38:16.088634 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:38:16.088640 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:38:16.088659 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:38:16.088665 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:38:16.088671 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:38:16.088676 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:38:16.088682 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:38:16.088687 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:38:16.088693 | orchestrator | 2026-03-09 00:38:16.088699 | orchestrator | 2026-03-09 00:38:16.088704 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:38:16.088710 | orchestrator | Monday 09 March 2026 00:38:15 +0000 (0:00:01.916) 0:00:24.427 ********** 2026-03-09 00:38:16.088716 | orchestrator | =============================================================================== 2026-03-09 00:38:16.088721 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.93s 2026-03-09 00:38:16.088727 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.03s 2026-03-09 00:38:16.088732 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s 2026-03-09 00:38:16.088738 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s 2026-03-09 00:38:16.088744 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.19s 2026-03-09 00:38:16.088749 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s 2026-03-09 00:38:16.088755 | orchestrator | osism.services.hddtemp : Check 
if drivetemp module is available --------- 1.17s 2026-03-09 00:38:16.088764 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.84s 2026-03-09 00:38:16.088770 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.70s 2026-03-09 00:38:16.375618 | orchestrator | ++ semver latest 7.1.1 2026-03-09 00:38:16.423542 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-09 00:38:16.423674 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-09 00:38:16.423690 | orchestrator | + sudo systemctl restart manager.service 2026-03-09 00:38:56.359032 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-09 00:38:56.359178 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-09 00:38:56.359201 | orchestrator | + local max_attempts=60 2026-03-09 00:38:56.359214 | orchestrator | + local name=ceph-ansible 2026-03-09 00:38:56.359226 | orchestrator | + local attempt_num=1 2026-03-09 00:38:56.359237 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:38:56.393670 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:38:56.393766 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:38:56.393781 | orchestrator | + sleep 5 2026-03-09 00:39:01.398604 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:39:01.427640 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:39:01.427744 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:39:01.427759 | orchestrator | + sleep 5 2026-03-09 00:39:06.431207 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:39:06.471729 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:39:06.471815 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:39:06.471830 | orchestrator | + sleep 5 2026-03-09 00:39:11.476993 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:39:11.513237 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:39:11.513326 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:39:11.513342 | orchestrator | + sleep 5 2026-03-09 00:39:16.519099 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:39:16.558195 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:39:16.558287 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:39:16.558302 | orchestrator | + sleep 5 2026-03-09 00:39:21.563121 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:39:21.598811 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:39:21.598904 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:39:21.598918 | orchestrator | + sleep 5 2026-03-09 00:39:26.603572 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:39:26.655202 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:39:26.655285 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:39:26.655300 | orchestrator | + sleep 5 2026-03-09 00:39:31.660279 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:39:31.726750 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:39:31.726846 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:39:31.726862 | orchestrator | + sleep 5 2026-03-09 00:39:36.730283 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:39:36.779993 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:39:36.780092 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:39:36.780109 | orchestrator | + sleep 5 2026-03-09 00:39:41.783837 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:39:41.826168 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:39:41.826261 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:39:41.826271 | orchestrator | + sleep 5 2026-03-09 00:39:46.830750 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:39:46.869181 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:39:46.869321 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:39:46.869340 | orchestrator | + sleep 5 2026-03-09 00:39:51.873873 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:39:51.915779 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:39:51.915887 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:39:51.915898 | orchestrator | + sleep 5 2026-03-09 00:39:56.920961 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:39:56.958726 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-09 00:39:56.958836 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-09 00:39:56.958856 | orchestrator | + sleep 5 2026-03-09 00:40:01.964119 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-09 00:40:02.003346 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:02.003449 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-09 00:40:02.003466 | orchestrator | + local max_attempts=60 2026-03-09 00:40:02.003479 | orchestrator | + local name=kolla-ansible 2026-03-09 00:40:02.003490 | orchestrator | + local attempt_num=1 2026-03-09 00:40:02.004153 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-09 00:40:02.029721 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:02.029992 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-09 00:40:02.030066 | orchestrator | + local max_attempts=60 2026-03-09 00:40:02.030090 | orchestrator | + local name=osism-ansible 2026-03-09 00:40:02.030109 | orchestrator | + local attempt_num=1 2026-03-09 00:40:02.030148 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-09 00:40:02.053939 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-09 00:40:02.054099 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-09 00:40:02.054119 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-09 00:40:02.220788 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-09 00:40:02.364131 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-09 00:40:02.675900 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-09 00:40:02.676996 | orchestrator | + osism apply gather-facts 2026-03-09 00:40:14.761594 | orchestrator | 2026-03-09 00:40:14 | INFO  | Prepare task for execution of gather-facts. 2026-03-09 00:40:14.824040 | orchestrator | 2026-03-09 00:40:14 | INFO  | Task 30cf58d6-00b1-416e-baeb-b3ad3c2b1753 (gather-facts) was prepared for execution. 2026-03-09 00:40:14.824131 | orchestrator | 2026-03-09 00:40:14 | INFO  | It takes a moment until task 30cf58d6-00b1-416e-baeb-b3ad3c2b1753 (gather-facts) has been started and output is visible here. 
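The polling loop traced above (`wait_for_container_healthy 60 ceph-ansible`, then `kolla-ansible` and `osism-ansible`) can be reconstructed from its `set -x` output. A plausible sketch — the `DOCKER_INSPECT` variable is an injection point added here for testability; the real script presumably calls `/usr/bin/docker` directly:

```shell
#!/usr/bin/env bash
# Reconstruction of the wait_for_container_healthy helper, inferred from
# the -x trace in the log: poll the container's health status every 5s,
# giving up after max_attempts polls.
DOCKER_INSPECT=${DOCKER_INSPECT:-"docker inspect -f {{.State.Health.Status}}"}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$($DOCKER_INSPECT "$name")" == "healthy" ]]; do
        # Give up once the attempt counter reaches the limit.
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

Note the health status passes through `unhealthy` and `starting` before `healthy` in the log — the loop only cares about the terminal state, so intermediate values just cost another 5-second sleep.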
2026-03-09 00:40:28.544366 | orchestrator | 2026-03-09 00:40:28.544490 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-09 00:40:28.544513 | orchestrator | 2026-03-09 00:40:28.544611 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-09 00:40:28.544624 | orchestrator | Monday 09 March 2026 00:40:19 +0000 (0:00:00.223) 0:00:00.223 ********** 2026-03-09 00:40:28.544633 | orchestrator | ok: [testbed-manager] 2026-03-09 00:40:28.544643 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:40:28.544652 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:40:28.544660 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:40:28.544669 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:40:28.544677 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:40:28.544685 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:40:28.544694 | orchestrator | 2026-03-09 00:40:28.544703 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-09 00:40:28.544711 | orchestrator | 2026-03-09 00:40:28.544719 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-09 00:40:28.544728 | orchestrator | Monday 09 March 2026 00:40:27 +0000 (0:00:08.338) 0:00:08.561 ********** 2026-03-09 00:40:28.544737 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:40:28.544747 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:40:28.544755 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:40:28.544764 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:40:28.544772 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:40:28.544780 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:40:28.544789 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:40:28.544797 | orchestrator | 2026-03-09 00:40:28.544806 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-09 00:40:28.544814 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:40:28.544872 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:40:28.544882 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:40:28.544891 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:40:28.544900 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:40:28.544908 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:40:28.544919 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 00:40:28.544929 | orchestrator | 2026-03-09 00:40:28.544938 | orchestrator | 2026-03-09 00:40:28.544948 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:40:28.544959 | orchestrator | Monday 09 March 2026 00:40:28 +0000 (0:00:00.595) 0:00:09.157 ********** 2026-03-09 00:40:28.544968 | orchestrator | =============================================================================== 2026-03-09 00:40:28.544978 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.34s 2026-03-09 00:40:28.544988 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2026-03-09 00:40:28.829672 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-09 00:40:28.843575 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-09 
00:40:28.853911 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-09 00:40:28.865377 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-09 00:40:28.880064 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-09 00:40:28.889791 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-09 00:40:28.901062 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-09 00:40:28.911961 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-09 00:40:28.929465 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-09 00:40:28.940177 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-09 00:40:28.951486 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-09 00:40:28.977558 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-09 00:40:28.990859 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-09 00:40:29.000500 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-09 00:40:29.015894 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-09 00:40:29.030203 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-09 00:40:29.048178 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-09 00:40:29.061917 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-09 00:40:29.071626 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-09 00:40:29.080513 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-09 00:40:29.090761 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-09 00:40:29.101357 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-09 00:40:29.116868 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-09 00:40:29.128909 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-09 00:40:29.310480 | orchestrator | ok: Runtime: 0:24:51.282632 2026-03-09 00:40:29.416090 | 2026-03-09 00:40:29.416312 | TASK [Deploy services] 2026-03-09 00:40:29.950695 | orchestrator | skipping: Conditional result was False 2026-03-09 00:40:29.969427 | 2026-03-09 00:40:29.969601 | TASK [Deploy in a nutshell] 2026-03-09 00:40:30.776146 | orchestrator | + set -e 2026-03-09 00:40:30.776333 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-09 00:40:30.776358 | orchestrator | ++ export INTERACTIVE=false 2026-03-09 00:40:30.776380 | orchestrator | ++ INTERACTIVE=false 2026-03-09 00:40:30.776394 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-09 00:40:30.776407 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-09 00:40:30.776420 | 
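The long run of `sudo ln -sf` calls above installs the deploy/upgrade/bootstrap scripts as entry points under `/usr/local/bin`. The same effect can be expressed as a data-driven loop — this is an illustrative rewrite, not the testbed's actual script; `SCRIPTS_DIR`, `BIN_DIR`, and the (truncated) mapping table are assumptions, and only a subset of the log's links is shown:

```shell
#!/usr/bin/env bash
# Illustrative data-driven version of the repeated `sudo ln -sf` calls.
# The real script hardcodes /opt/configuration and /usr/local/bin; the
# variables here exist so the sketch can be exercised in a temp dir.
set -euo pipefail

SCRIPTS_DIR=${SCRIPTS_DIR:-/opt/configuration/scripts}
BIN_DIR=${BIN_DIR:-/usr/local/bin}

# target name -> script path relative to SCRIPTS_DIR (subset of the log)
declare -A entry_points=(
    [deploy-ceph-with-ansible]="deploy/100-ceph-with-ansible.sh"
    [deploy-infrastructure]="deploy/200-infrastructure.sh"
    [deploy-openstack]="deploy/300-openstack.sh"
    [upgrade-openstack]="upgrade/300-openstack.sh"
)

install_entry_points() {
    local name
    for name in "${!entry_points[@]}"; do
        # -sf: create a symlink, replacing any stale one from a prior run
        ln -sf "$SCRIPTS_DIR/${entry_points[$name]}" "$BIN_DIR/$name"
    done
}
```

Using `-sf` makes the step idempotent: rerunning the deploy script simply refreshes the links rather than failing on existing ones.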
orchestrator | + source /opt/manager-vars.sh 2026-03-09 00:40:30.776465 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-09 00:40:30.776493 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-09 00:40:30.776508 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-09 00:40:30.776554 | orchestrator | ++ CEPH_VERSION=reef 2026-03-09 00:40:30.776568 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-09 00:40:30.776590 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-09 00:40:30.776610 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-09 00:40:30.776643 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-09 00:40:30.776667 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-09 00:40:30.776724 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-09 00:40:30.776744 | orchestrator | ++ export ARA=false 2026-03-09 00:40:30.776763 | orchestrator | ++ ARA=false 2026-03-09 00:40:30.776781 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-09 00:40:30.776805 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-09 00:40:30.776823 | orchestrator | ++ export TEMPEST=true 2026-03-09 00:40:30.776842 | orchestrator | ++ TEMPEST=true 2026-03-09 00:40:30.776860 | orchestrator | ++ export IS_ZUUL=true 2026-03-09 00:40:30.776875 | orchestrator | ++ IS_ZUUL=true 2026-03-09 00:40:30.776886 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.100 2026-03-09 00:40:30.776950 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.100 2026-03-09 00:40:30.776972 | orchestrator | ++ export EXTERNAL_API=false 2026-03-09 00:40:30.776991 | orchestrator | ++ EXTERNAL_API=false 2026-03-09 00:40:30.777008 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-09 00:40:30.777027 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-09 00:40:30.777045 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-09 00:40:30.777063 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-09 00:40:30.777084 | orchestrator | 2026-03-09 00:40:30.777102 | orchestrator | # PULL IMAGES 
2026-03-09 00:40:30.777121 | orchestrator | 2026-03-09 00:40:30.777136 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-09 00:40:30.777164 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-09 00:40:30.777183 | orchestrator | + echo 2026-03-09 00:40:30.777201 | orchestrator | + echo '# PULL IMAGES' 2026-03-09 00:40:30.777221 | orchestrator | + echo 2026-03-09 00:40:30.777260 | orchestrator | ++ semver latest 7.0.0 2026-03-09 00:40:30.824719 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-09 00:40:30.824810 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-09 00:40:30.824825 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-09 00:40:32.784225 | orchestrator | 2026-03-09 00:40:32 | INFO  | Trying to run play pull-images in environment custom 2026-03-09 00:40:42.799585 | orchestrator | 2026-03-09 00:40:42 | INFO  | Prepare task for execution of pull-images. 2026-03-09 00:40:42.868505 | orchestrator | 2026-03-09 00:40:42 | INFO  | Task 45acd6cb-5906-450a-891f-5b1b81f071aa (pull-images) was prepared for execution. 2026-03-09 00:40:42.868653 | orchestrator | 2026-03-09 00:40:42 | INFO  | Task 45acd6cb-5906-450a-891f-5b1b81f071aa is running in background. No more output. Check ARA for logs. 2026-03-09 00:40:44.873089 | orchestrator | 2026-03-09 00:40:44 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-09 00:40:55.015666 | orchestrator | 2026-03-09 00:40:55 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-09 00:40:55.095108 | orchestrator | 2026-03-09 00:40:55 | INFO  | Task 01e82d1b-4ba8-4750-9e36-ab642515be9e (wipe-partitions) was prepared for execution. 2026-03-09 00:40:55.095168 | orchestrator | 2026-03-09 00:40:55 | INFO  | It takes a moment until task 01e82d1b-4ba8-4750-9e36-ab642515be9e (wipe-partitions) has been started and output is visible here. 
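The trace above shows the deploy script's version gate: `semver latest 7.0.0` prints `-1`, the literal `latest` check then matches, and `osism apply --no-wait -r 2 -e custom pull-images` runs anyway. A minimal sketch of that gating pattern — `semver_cmp` and `should_pull` are hypothetical stand-ins for the testbed's real helpers, with `sort -V` doing the comparison:

```shell
#!/usr/bin/env bash
# Sketch of the version gate seen in the trace: pull images when the manager
# version is >= 7.0.0 OR is the literal tag "latest". semver_cmp is a stub
# (assumption), approximating a helper that prints -1, 0, or 1.
semver_cmp() {
    if [[ "$1" == "$2" ]]; then echo 0; return; fi
    local first
    first=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    [[ "$first" == "$1" ]] && echo -1 || echo 1
}

should_pull() {
    local v=$1
    [[ $(semver_cmp "$v" 7.0.0) -ge 0 ]] || [[ "$v" == latest ]]
}

should_pull latest && echo "latest: pull"
should_pull 6.0.0 || echo "6.0.0: skip"
should_pull 7.1.0 && echo "7.1.0: pull"
```

The `|| [[ "$v" == latest ]]` branch is what lets the floating `latest` tag bypass the numeric comparison, matching the `[[ latest == \l\a\t\e\s\t ]]` test in the xtrace.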
2026-03-09 00:41:08.168494 | orchestrator | 2026-03-09 00:41:08.168671 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-09 00:41:08.168693 | orchestrator | 2026-03-09 00:41:08.168708 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-09 00:41:08.168731 | orchestrator | Monday 09 March 2026 00:40:59 +0000 (0:00:00.137) 0:00:00.137 ********** 2026-03-09 00:41:08.168778 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:41:08.168795 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:41:08.168809 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:41:08.168824 | orchestrator | 2026-03-09 00:41:08.168837 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-09 00:41:08.168850 | orchestrator | Monday 09 March 2026 00:41:00 +0000 (0:00:00.599) 0:00:00.736 ********** 2026-03-09 00:41:08.168870 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:08.168884 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:41:08.168899 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:41:08.168913 | orchestrator | 2026-03-09 00:41:08.168927 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-09 00:41:08.168941 | orchestrator | Monday 09 March 2026 00:41:00 +0000 (0:00:00.354) 0:00:01.091 ********** 2026-03-09 00:41:08.168956 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:41:08.168970 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:41:08.168984 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:41:08.168997 | orchestrator | 2026-03-09 00:41:08.169010 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-09 00:41:08.169024 | orchestrator | Monday 09 March 2026 00:41:01 +0000 (0:00:00.565) 0:00:01.656 ********** 2026-03-09 00:41:08.169038 | orchestrator | skipping: 
[testbed-node-3] 2026-03-09 00:41:08.169052 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:41:08.169065 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:41:08.169080 | orchestrator | 2026-03-09 00:41:08.169093 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-09 00:41:08.169107 | orchestrator | Monday 09 March 2026 00:41:01 +0000 (0:00:00.260) 0:00:01.917 ********** 2026-03-09 00:41:08.169122 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-09 00:41:08.169142 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-09 00:41:08.169158 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-09 00:41:08.169173 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-09 00:41:08.169186 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-09 00:41:08.169200 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-09 00:41:08.169213 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-09 00:41:08.169226 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-09 00:41:08.169238 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-09 00:41:08.169251 | orchestrator | 2026-03-09 00:41:08.169264 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-09 00:41:08.169278 | orchestrator | Monday 09 March 2026 00:41:02 +0000 (0:00:01.288) 0:00:03.205 ********** 2026-03-09 00:41:08.169291 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-09 00:41:08.169303 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-09 00:41:08.169317 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-09 00:41:08.169331 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-09 00:41:08.169344 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-09 00:41:08.169359 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-09 00:41:08.169371 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-09 00:41:08.169382 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-09 00:41:08.169396 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-09 00:41:08.169409 | orchestrator | 2026-03-09 00:41:08.169431 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-09 00:41:08.169447 | orchestrator | Monday 09 March 2026 00:41:04 +0000 (0:00:01.509) 0:00:04.715 ********** 2026-03-09 00:41:08.169461 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-09 00:41:08.169474 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-09 00:41:08.169489 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-09 00:41:08.169503 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-09 00:41:08.169557 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-09 00:41:08.169573 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-09 00:41:08.169588 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-09 00:41:08.169602 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-09 00:41:08.169616 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-09 00:41:08.169632 | orchestrator | 2026-03-09 00:41:08.169647 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-09 00:41:08.169662 | orchestrator | Monday 09 March 2026 00:41:06 +0000 (0:00:02.190) 0:00:06.906 ********** 2026-03-09 00:41:08.169677 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:41:08.169693 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:41:08.169708 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:41:08.169723 | orchestrator | 2026-03-09 00:41:08.169740 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-09 00:41:08.169755 | orchestrator | Monday 09 March 2026 00:41:07 +0000 (0:00:00.656) 0:00:07.563 ********** 2026-03-09 00:41:08.169770 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:41:08.169786 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:41:08.169801 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:41:08.169818 | orchestrator | 2026-03-09 00:41:08.169833 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:41:08.169848 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:41:08.169863 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:41:08.169903 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:41:08.169918 | orchestrator | 2026-03-09 00:41:08.169934 | orchestrator | 2026-03-09 00:41:08.169950 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:41:08.169966 | orchestrator | Monday 09 March 2026 00:41:07 +0000 (0:00:00.653) 0:00:08.216 ********** 2026-03-09 00:41:08.169981 | orchestrator | =============================================================================== 2026-03-09 00:41:08.169998 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.19s 2026-03-09 00:41:08.170084 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.51s 2026-03-09 00:41:08.170101 | orchestrator | Check device availability ----------------------------------------------- 1.29s 2026-03-09 00:41:08.170115 | orchestrator | Reload udev rules ------------------------------------------------------- 0.66s 2026-03-09 00:41:08.170129 | orchestrator | Request device events from the kernel 
----------------------------------- 0.65s 2026-03-09 00:41:08.170144 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s 2026-03-09 00:41:08.170159 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.57s 2026-03-09 00:41:08.170171 | orchestrator | Remove all rook related logical devices --------------------------------- 0.35s 2026-03-09 00:41:08.170184 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2026-03-09 00:41:20.489300 | orchestrator | 2026-03-09 00:41:20 | INFO  | Prepare task for execution of facts. 2026-03-09 00:41:20.568160 | orchestrator | 2026-03-09 00:41:20 | INFO  | Task 01cb1ea1-98e0-4d58-83f6-7dbed953d449 (facts) was prepared for execution. 2026-03-09 00:41:20.568230 | orchestrator | 2026-03-09 00:41:20 | INFO  | It takes a moment until task 01cb1ea1-98e0-4d58-83f6-7dbed953d449 (facts) has been started and output is visible here. 2026-03-09 00:41:33.440725 | orchestrator | 2026-03-09 00:41:33.440849 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-09 00:41:33.440869 | orchestrator | 2026-03-09 00:41:33.440910 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-09 00:41:33.440922 | orchestrator | Monday 09 March 2026 00:41:25 +0000 (0:00:00.258) 0:00:00.258 ********** 2026-03-09 00:41:33.440934 | orchestrator | ok: [testbed-manager] 2026-03-09 00:41:33.440946 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:41:33.440957 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:41:33.440967 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:41:33.440978 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:41:33.440989 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:41:33.441000 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:41:33.441010 | orchestrator | 2026-03-09 00:41:33.441021 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-03-09 00:41:33.441032 | orchestrator | Monday 09 March 2026 00:41:26 +0000 (0:00:01.133) 0:00:01.392 ********** 2026-03-09 00:41:33.441043 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:41:33.441055 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:41:33.441066 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:41:33.441077 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:41:33.441089 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:33.441108 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:41:33.441126 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:41:33.441144 | orchestrator | 2026-03-09 00:41:33.441161 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-09 00:41:33.441213 | orchestrator | 2026-03-09 00:41:33.441226 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-09 00:41:33.441238 | orchestrator | Monday 09 March 2026 00:41:27 +0000 (0:00:01.238) 0:00:02.631 ********** 2026-03-09 00:41:33.441249 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:41:33.441262 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:41:33.441280 | orchestrator | ok: [testbed-manager] 2026-03-09 00:41:33.441298 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:41:33.441311 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:41:33.441324 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:41:33.441338 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:41:33.441351 | orchestrator | 2026-03-09 00:41:33.441363 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-09 00:41:33.441376 | orchestrator | 2026-03-09 00:41:33.441388 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-09 00:41:33.441401 | orchestrator | Monday 09 March 
2026 00:41:32 +0000 (0:00:05.222) 0:00:07.853 ********** 2026-03-09 00:41:33.441414 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:41:33.441426 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:41:33.441439 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:41:33.441451 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:41:33.441463 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:33.441475 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:41:33.441487 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:41:33.441500 | orchestrator | 2026-03-09 00:41:33.441512 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:41:33.441550 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:41:33.441565 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:41:33.441579 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:41:33.441592 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:41:33.441604 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:41:33.441627 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:41:33.441638 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:41:33.441649 | orchestrator | 2026-03-09 00:41:33.441660 | orchestrator | 2026-03-09 00:41:33.441670 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:41:33.441681 | orchestrator | Monday 09 March 2026 00:41:33 +0000 (0:00:00.496) 0:00:08.349 ********** 2026-03-09 00:41:33.441692 
| orchestrator | =============================================================================== 2026-03-09 00:41:33.441703 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.22s 2026-03-09 00:41:33.441714 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2026-03-09 00:41:33.441725 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2026-03-09 00:41:33.441736 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-03-09 00:41:35.727975 | orchestrator | 2026-03-09 00:41:35 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-03-09 00:41:35.790793 | orchestrator | 2026-03-09 00:41:35 | INFO  | Task 53e98cd4-a9ce-4a3b-ab0e-08c0c0f8c127 (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-09 00:41:35.790885 | orchestrator | 2026-03-09 00:41:35 | INFO  | It takes a moment until task 53e98cd4-a9ce-4a3b-ab0e-08c0c0f8c127 (ceph-configure-lvm-volumes) has been started and output is visible here. 
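For reference, the wipe-partitions play that ran before the facts play reduces, per data disk (`/dev/sdb`..`/dev/sdd` in this run), to roughly the following sequence. This is a sketch: `wipe_device` is a hypothetical helper, written to take any path so it can be exercised against a plain file instead of a real disk.

```shell
# Per-device wipe sequence mirroring the "Wipe partitions" play:
# drop filesystem/partition signatures, then zero the first N MiB.
wipe_device() {
    local dev=$1 mib=${2:-32}
    wipefs --all "$dev" 2>/dev/null || true           # erase fs/LVM/GPT signatures
    dd if=/dev/zero of="$dev" bs=1M count="$mib" conv=notrunc status=none
}
# On real hardware the play then refreshes udev so the kernel re-reads the
# (now empty) partition tables:
#   udevadm control --reload && udevadm trigger --action=change
```

`conv=notrunc` keeps the target's size intact, which matters for block devices and makes the helper safe to demo on a sparse file.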
2026-03-09 00:41:47.534229 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-09 00:41:47.534372 | orchestrator | 2.16.14 2026-03-09 00:41:47.534400 | orchestrator | 2026-03-09 00:41:47.534419 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-09 00:41:47.534439 | orchestrator | 2026-03-09 00:41:47.534458 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-09 00:41:47.534477 | orchestrator | Monday 09 March 2026 00:41:40 +0000 (0:00:00.309) 0:00:00.309 ********** 2026-03-09 00:41:47.534495 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-09 00:41:47.534513 | orchestrator | 2026-03-09 00:41:47.534562 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-09 00:41:47.534580 | orchestrator | Monday 09 March 2026 00:41:40 +0000 (0:00:00.236) 0:00:00.546 ********** 2026-03-09 00:41:47.534599 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:41:47.534618 | orchestrator | 2026-03-09 00:41:47.534637 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.534655 | orchestrator | Monday 09 March 2026 00:41:40 +0000 (0:00:00.246) 0:00:00.792 ********** 2026-03-09 00:41:47.534688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-09 00:41:47.534701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-09 00:41:47.534712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-09 00:41:47.534723 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-09 00:41:47.534734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-09 
00:41:47.534745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-09 00:41:47.534761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-09 00:41:47.534779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-09 00:41:47.534797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-09 00:41:47.534814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-09 00:41:47.534862 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-09 00:41:47.534884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-09 00:41:47.534903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-09 00:41:47.534920 | orchestrator | 2026-03-09 00:41:47.534936 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.534948 | orchestrator | Monday 09 March 2026 00:41:41 +0000 (0:00:00.494) 0:00:01.287 ********** 2026-03-09 00:41:47.534959 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.534970 | orchestrator | 2026-03-09 00:41:47.534981 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.534992 | orchestrator | Monday 09 March 2026 00:41:41 +0000 (0:00:00.203) 0:00:01.490 ********** 2026-03-09 00:41:47.535003 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.535014 | orchestrator | 2026-03-09 00:41:47.535025 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.535043 | orchestrator | Monday 09 March 2026 00:41:41 +0000 (0:00:00.184) 0:00:01.675 ********** 2026-03-09 
00:41:47.535054 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.535065 | orchestrator | 2026-03-09 00:41:47.535075 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.535086 | orchestrator | Monday 09 March 2026 00:41:41 +0000 (0:00:00.193) 0:00:01.868 ********** 2026-03-09 00:41:47.535098 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.535109 | orchestrator | 2026-03-09 00:41:47.535120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.535131 | orchestrator | Monday 09 March 2026 00:41:41 +0000 (0:00:00.197) 0:00:02.066 ********** 2026-03-09 00:41:47.535142 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.535153 | orchestrator | 2026-03-09 00:41:47.535164 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.535175 | orchestrator | Monday 09 March 2026 00:41:42 +0000 (0:00:00.204) 0:00:02.271 ********** 2026-03-09 00:41:47.535186 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.535197 | orchestrator | 2026-03-09 00:41:47.535207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.535218 | orchestrator | Monday 09 March 2026 00:41:42 +0000 (0:00:00.209) 0:00:02.480 ********** 2026-03-09 00:41:47.535229 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.535240 | orchestrator | 2026-03-09 00:41:47.535251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.535262 | orchestrator | Monday 09 March 2026 00:41:42 +0000 (0:00:00.183) 0:00:02.664 ********** 2026-03-09 00:41:47.535273 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.535284 | orchestrator | 2026-03-09 00:41:47.535295 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-09 00:41:47.535306 | orchestrator | Monday 09 March 2026 00:41:42 +0000 (0:00:00.200) 0:00:02.865 ********** 2026-03-09 00:41:47.535317 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa) 2026-03-09 00:41:47.535329 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa) 2026-03-09 00:41:47.535340 | orchestrator | 2026-03-09 00:41:47.535351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.535382 | orchestrator | Monday 09 March 2026 00:41:43 +0000 (0:00:00.427) 0:00:03.293 ********** 2026-03-09 00:41:47.535394 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_26907958-5014-4e4e-aaae-f132ebc9345b) 2026-03-09 00:41:47.535405 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_26907958-5014-4e4e-aaae-f132ebc9345b) 2026-03-09 00:41:47.535416 | orchestrator | 2026-03-09 00:41:47.535434 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.535454 | orchestrator | Monday 09 March 2026 00:41:43 +0000 (0:00:00.667) 0:00:03.960 ********** 2026-03-09 00:41:47.535465 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_763f54df-2df6-4a17-b758-6e7498448fae) 2026-03-09 00:41:47.535476 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_763f54df-2df6-4a17-b758-6e7498448fae) 2026-03-09 00:41:47.535487 | orchestrator | 2026-03-09 00:41:47.535497 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.535508 | orchestrator | Monday 09 March 2026 00:41:44 +0000 (0:00:00.622) 0:00:04.583 ********** 2026-03-09 00:41:47.535598 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_49ad7546-ef2d-4696-ae5b-c2e2e05846ff) 2026-03-09 00:41:47.535612 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_49ad7546-ef2d-4696-ae5b-c2e2e05846ff) 2026-03-09 00:41:47.535623 | orchestrator | 2026-03-09 00:41:47.535634 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:41:47.535645 | orchestrator | Monday 09 March 2026 00:41:45 +0000 (0:00:00.887) 0:00:05.470 ********** 2026-03-09 00:41:47.535656 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-09 00:41:47.535667 | orchestrator | 2026-03-09 00:41:47.535678 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:47.535689 | orchestrator | Monday 09 March 2026 00:41:45 +0000 (0:00:00.339) 0:00:05.809 ********** 2026-03-09 00:41:47.535700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-09 00:41:47.535710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-09 00:41:47.535721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-09 00:41:47.535732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-09 00:41:47.535743 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-09 00:41:47.535753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-09 00:41:47.535764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-09 00:41:47.535775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-09 00:41:47.535786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-09 00:41:47.535797 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-09 00:41:47.535808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-09 00:41:47.535819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-09 00:41:47.535829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-09 00:41:47.535840 | orchestrator | 2026-03-09 00:41:47.535851 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:47.535862 | orchestrator | Monday 09 March 2026 00:41:46 +0000 (0:00:00.394) 0:00:06.204 ********** 2026-03-09 00:41:47.535873 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.535890 | orchestrator | 2026-03-09 00:41:47.535910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:47.535929 | orchestrator | Monday 09 March 2026 00:41:46 +0000 (0:00:00.206) 0:00:06.410 ********** 2026-03-09 00:41:47.535947 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.535964 | orchestrator | 2026-03-09 00:41:47.535982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:47.535999 | orchestrator | Monday 09 March 2026 00:41:46 +0000 (0:00:00.198) 0:00:06.609 ********** 2026-03-09 00:41:47.536018 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.536049 | orchestrator | 2026-03-09 00:41:47.536069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:47.536090 | orchestrator | Monday 09 March 2026 00:41:46 +0000 (0:00:00.198) 0:00:06.808 ********** 2026-03-09 00:41:47.536109 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.536128 | orchestrator | 2026-03-09 00:41:47.536147 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-09 00:41:47.536167 | orchestrator | Monday 09 March 2026 00:41:46 +0000 (0:00:00.212) 0:00:07.020 ********** 2026-03-09 00:41:47.536187 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.536207 | orchestrator | 2026-03-09 00:41:47.536227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:47.536247 | orchestrator | Monday 09 March 2026 00:41:47 +0000 (0:00:00.198) 0:00:07.219 ********** 2026-03-09 00:41:47.536267 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.536286 | orchestrator | 2026-03-09 00:41:47.536303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:47.536323 | orchestrator | Monday 09 March 2026 00:41:47 +0000 (0:00:00.205) 0:00:07.424 ********** 2026-03-09 00:41:47.536343 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:47.536364 | orchestrator | 2026-03-09 00:41:47.536399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:54.981447 | orchestrator | Monday 09 March 2026 00:41:47 +0000 (0:00:00.199) 0:00:07.623 ********** 2026-03-09 00:41:54.981565 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.981575 | orchestrator | 2026-03-09 00:41:54.981583 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:54.981591 | orchestrator | Monday 09 March 2026 00:41:47 +0000 (0:00:00.183) 0:00:07.806 ********** 2026-03-09 00:41:54.981599 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-09 00:41:54.981606 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-09 00:41:54.981613 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-09 00:41:54.981620 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-09 00:41:54.981628 | orchestrator | 2026-03-09 
00:41:54.981635 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:54.981657 | orchestrator | Monday 09 March 2026 00:41:48 +0000 (0:00:00.960) 0:00:08.767 ********** 2026-03-09 00:41:54.981664 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.981671 | orchestrator | 2026-03-09 00:41:54.981678 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:54.981685 | orchestrator | Monday 09 March 2026 00:41:48 +0000 (0:00:00.216) 0:00:08.983 ********** 2026-03-09 00:41:54.981691 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.981698 | orchestrator | 2026-03-09 00:41:54.981705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:54.981712 | orchestrator | Monday 09 March 2026 00:41:49 +0000 (0:00:00.207) 0:00:09.191 ********** 2026-03-09 00:41:54.981719 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.981726 | orchestrator | 2026-03-09 00:41:54.981734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:41:54.981741 | orchestrator | Monday 09 March 2026 00:41:49 +0000 (0:00:00.202) 0:00:09.393 ********** 2026-03-09 00:41:54.981748 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.981756 | orchestrator | 2026-03-09 00:41:54.981763 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-09 00:41:54.981770 | orchestrator | Monday 09 March 2026 00:41:49 +0000 (0:00:00.208) 0:00:09.602 ********** 2026-03-09 00:41:54.981778 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-09 00:41:54.981784 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-09 00:41:54.981791 | orchestrator | 2026-03-09 00:41:54.981798 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-09 00:41:54.981804 | orchestrator | Monday 09 March 2026 00:41:49 +0000 (0:00:00.169) 0:00:09.771 ********** 2026-03-09 00:41:54.981829 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.981835 | orchestrator | 2026-03-09 00:41:54.981842 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-09 00:41:54.981849 | orchestrator | Monday 09 March 2026 00:41:49 +0000 (0:00:00.123) 0:00:09.895 ********** 2026-03-09 00:41:54.981856 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.981863 | orchestrator | 2026-03-09 00:41:54.981870 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-09 00:41:54.981876 | orchestrator | Monday 09 March 2026 00:41:49 +0000 (0:00:00.135) 0:00:10.030 ********** 2026-03-09 00:41:54.981884 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.981891 | orchestrator | 2026-03-09 00:41:54.981898 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-09 00:41:54.981905 | orchestrator | Monday 09 March 2026 00:41:50 +0000 (0:00:00.123) 0:00:10.154 ********** 2026-03-09 00:41:54.981912 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:41:54.981920 | orchestrator | 2026-03-09 00:41:54.981927 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-09 00:41:54.981934 | orchestrator | Monday 09 March 2026 00:41:50 +0000 (0:00:00.134) 0:00:10.289 ********** 2026-03-09 00:41:54.981943 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d9cda85-a301-5b16-a7fe-308b162b7259'}}) 2026-03-09 00:41:54.981950 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8734b320-4ffe-530d-8e73-0aec819257b4'}}) 2026-03-09 00:41:54.981956 | orchestrator | 2026-03-09 00:41:54.981963 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-09 00:41:54.981970 | orchestrator | Monday 09 March 2026 00:41:50 +0000 (0:00:00.172) 0:00:10.461 ********** 2026-03-09 00:41:54.981977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d9cda85-a301-5b16-a7fe-308b162b7259'}})  2026-03-09 00:41:54.981989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8734b320-4ffe-530d-8e73-0aec819257b4'}})  2026-03-09 00:41:54.982000 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.982008 | orchestrator | 2026-03-09 00:41:54.982039 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-09 00:41:54.982049 | orchestrator | Monday 09 March 2026 00:41:50 +0000 (0:00:00.158) 0:00:10.619 ********** 2026-03-09 00:41:54.982056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d9cda85-a301-5b16-a7fe-308b162b7259'}})  2026-03-09 00:41:54.982063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8734b320-4ffe-530d-8e73-0aec819257b4'}})  2026-03-09 00:41:54.982070 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.982077 | orchestrator | 2026-03-09 00:41:54.982084 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-09 00:41:54.982091 | orchestrator | Monday 09 March 2026 00:41:50 +0000 (0:00:00.317) 0:00:10.937 ********** 2026-03-09 00:41:54.982098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d9cda85-a301-5b16-a7fe-308b162b7259'}})  2026-03-09 00:41:54.982119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8734b320-4ffe-530d-8e73-0aec819257b4'}})  2026-03-09 00:41:54.982127 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.982135 | 
orchestrator | 2026-03-09 00:41:54.982141 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-09 00:41:54.982149 | orchestrator | Monday 09 March 2026 00:41:50 +0000 (0:00:00.154) 0:00:11.092 ********** 2026-03-09 00:41:54.982156 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:41:54.982163 | orchestrator | 2026-03-09 00:41:54.982170 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-09 00:41:54.982177 | orchestrator | Monday 09 March 2026 00:41:51 +0000 (0:00:00.145) 0:00:11.237 ********** 2026-03-09 00:41:54.982184 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:41:54.982197 | orchestrator | 2026-03-09 00:41:54.982203 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-09 00:41:54.982210 | orchestrator | Monday 09 March 2026 00:41:51 +0000 (0:00:00.138) 0:00:11.375 ********** 2026-03-09 00:41:54.982217 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.982225 | orchestrator | 2026-03-09 00:41:54.982232 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-09 00:41:54.982239 | orchestrator | Monday 09 March 2026 00:41:51 +0000 (0:00:00.140) 0:00:11.516 ********** 2026-03-09 00:41:54.982246 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.982253 | orchestrator | 2026-03-09 00:41:54.982261 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-09 00:41:54.982268 | orchestrator | Monday 09 March 2026 00:41:51 +0000 (0:00:00.142) 0:00:11.659 ********** 2026-03-09 00:41:54.982276 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.982283 | orchestrator | 2026-03-09 00:41:54.982290 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-09 00:41:54.982297 | orchestrator | Monday 09 March 2026 00:41:51 +0000 
(0:00:00.136) 0:00:11.795 ********** 2026-03-09 00:41:54.982305 | orchestrator | ok: [testbed-node-3] => { 2026-03-09 00:41:54.982312 | orchestrator |  "ceph_osd_devices": { 2026-03-09 00:41:54.982319 | orchestrator |  "sdb": { 2026-03-09 00:41:54.982327 | orchestrator |  "osd_lvm_uuid": "5d9cda85-a301-5b16-a7fe-308b162b7259" 2026-03-09 00:41:54.982334 | orchestrator |  }, 2026-03-09 00:41:54.982341 | orchestrator |  "sdc": { 2026-03-09 00:41:54.982348 | orchestrator |  "osd_lvm_uuid": "8734b320-4ffe-530d-8e73-0aec819257b4" 2026-03-09 00:41:54.982355 | orchestrator |  } 2026-03-09 00:41:54.982362 | orchestrator |  } 2026-03-09 00:41:54.982369 | orchestrator | } 2026-03-09 00:41:54.982376 | orchestrator | 2026-03-09 00:41:54.982383 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-09 00:41:54.982389 | orchestrator | Monday 09 March 2026 00:41:51 +0000 (0:00:00.142) 0:00:11.937 ********** 2026-03-09 00:41:54.982396 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.982402 | orchestrator | 2026-03-09 00:41:54.982409 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-09 00:41:54.982415 | orchestrator | Monday 09 March 2026 00:41:51 +0000 (0:00:00.140) 0:00:12.078 ********** 2026-03-09 00:41:54.982422 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.982428 | orchestrator | 2026-03-09 00:41:54.982435 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-09 00:41:54.982441 | orchestrator | Monday 09 March 2026 00:41:52 +0000 (0:00:00.138) 0:00:12.217 ********** 2026-03-09 00:41:54.982448 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:41:54.982455 | orchestrator | 2026-03-09 00:41:54.982461 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-09 00:41:54.982468 | orchestrator | Monday 09 March 2026 00:41:52 +0000 
(0:00:00.131) 0:00:12.349 ********** 2026-03-09 00:41:54.982474 | orchestrator | changed: [testbed-node-3] => { 2026-03-09 00:41:54.982481 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-09 00:41:54.982487 | orchestrator |  "ceph_osd_devices": { 2026-03-09 00:41:54.982494 | orchestrator |  "sdb": { 2026-03-09 00:41:54.982500 | orchestrator |  "osd_lvm_uuid": "5d9cda85-a301-5b16-a7fe-308b162b7259" 2026-03-09 00:41:54.982507 | orchestrator |  }, 2026-03-09 00:41:54.982525 | orchestrator |  "sdc": { 2026-03-09 00:41:54.982531 | orchestrator |  "osd_lvm_uuid": "8734b320-4ffe-530d-8e73-0aec819257b4" 2026-03-09 00:41:54.982537 | orchestrator |  } 2026-03-09 00:41:54.982543 | orchestrator |  }, 2026-03-09 00:41:54.982550 | orchestrator |  "lvm_volumes": [ 2026-03-09 00:41:54.982557 | orchestrator |  { 2026-03-09 00:41:54.982563 | orchestrator |  "data": "osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259", 2026-03-09 00:41:54.982570 | orchestrator |  "data_vg": "ceph-5d9cda85-a301-5b16-a7fe-308b162b7259" 2026-03-09 00:41:54.982583 | orchestrator |  }, 2026-03-09 00:41:54.982590 | orchestrator |  { 2026-03-09 00:41:54.982597 | orchestrator |  "data": "osd-block-8734b320-4ffe-530d-8e73-0aec819257b4", 2026-03-09 00:41:54.982604 | orchestrator |  "data_vg": "ceph-8734b320-4ffe-530d-8e73-0aec819257b4" 2026-03-09 00:41:54.982611 | orchestrator |  } 2026-03-09 00:41:54.982617 | orchestrator |  ] 2026-03-09 00:41:54.982624 | orchestrator |  } 2026-03-09 00:41:54.982631 | orchestrator | } 2026-03-09 00:41:54.982638 | orchestrator | 2026-03-09 00:41:54.982645 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-09 00:41:54.982652 | orchestrator | Monday 09 March 2026 00:41:52 +0000 (0:00:00.398) 0:00:12.747 ********** 2026-03-09 00:41:54.982659 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-09 00:41:54.982666 | orchestrator | 2026-03-09 00:41:54.982673 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-09 00:41:54.982679 | orchestrator | 2026-03-09 00:41:54.982686 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-09 00:41:54.982693 | orchestrator | Monday 09 March 2026 00:41:54 +0000 (0:00:01.816) 0:00:14.563 ********** 2026-03-09 00:41:54.982700 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-09 00:41:54.982707 | orchestrator | 2026-03-09 00:41:54.982714 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-09 00:41:54.982721 | orchestrator | Monday 09 March 2026 00:41:54 +0000 (0:00:00.257) 0:00:14.821 ********** 2026-03-09 00:41:54.982728 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:41:54.982735 | orchestrator | 2026-03-09 00:41:54.982746 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.313325 | orchestrator | Monday 09 March 2026 00:41:54 +0000 (0:00:00.251) 0:00:15.073 ********** 2026-03-09 00:42:02.313447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-09 00:42:02.313464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-09 00:42:02.313477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-09 00:42:02.313488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-09 00:42:02.313572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-09 00:42:02.313588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-09 00:42:02.313599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-09 00:42:02.313616 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-09 00:42:02.313627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-09 00:42:02.313639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-09 00:42:02.313649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-09 00:42:02.313660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-09 00:42:02.313691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-09 00:42:02.313703 | orchestrator | 2026-03-09 00:42:02.313715 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.313726 | orchestrator | Monday 09 March 2026 00:41:55 +0000 (0:00:00.381) 0:00:15.455 ********** 2026-03-09 00:42:02.313737 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.313749 | orchestrator | 2026-03-09 00:42:02.313760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.313771 | orchestrator | Monday 09 March 2026 00:41:55 +0000 (0:00:00.206) 0:00:15.661 ********** 2026-03-09 00:42:02.313804 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.313816 | orchestrator | 2026-03-09 00:42:02.313827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.313838 | orchestrator | Monday 09 March 2026 00:41:55 +0000 (0:00:00.189) 0:00:15.850 ********** 2026-03-09 00:42:02.313849 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.313860 | orchestrator | 2026-03-09 00:42:02.313873 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.313887 | 
orchestrator | Monday 09 March 2026 00:41:55 +0000 (0:00:00.192) 0:00:16.043 ********** 2026-03-09 00:42:02.313900 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.313912 | orchestrator | 2026-03-09 00:42:02.313925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.313937 | orchestrator | Monday 09 March 2026 00:41:56 +0000 (0:00:00.182) 0:00:16.226 ********** 2026-03-09 00:42:02.313950 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.313963 | orchestrator | 2026-03-09 00:42:02.313975 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.313988 | orchestrator | Monday 09 March 2026 00:41:56 +0000 (0:00:00.613) 0:00:16.840 ********** 2026-03-09 00:42:02.314001 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.314013 | orchestrator | 2026-03-09 00:42:02.314075 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.314091 | orchestrator | Monday 09 March 2026 00:41:56 +0000 (0:00:00.202) 0:00:17.043 ********** 2026-03-09 00:42:02.314110 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.314129 | orchestrator | 2026-03-09 00:42:02.314148 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.314164 | orchestrator | Monday 09 March 2026 00:41:57 +0000 (0:00:00.197) 0:00:17.240 ********** 2026-03-09 00:42:02.314185 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.314205 | orchestrator | 2026-03-09 00:42:02.314219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.314230 | orchestrator | Monday 09 March 2026 00:41:57 +0000 (0:00:00.200) 0:00:17.441 ********** 2026-03-09 00:42:02.314240 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2) 2026-03-09 00:42:02.314253 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2) 2026-03-09 00:42:02.314263 | orchestrator | 2026-03-09 00:42:02.314274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.314285 | orchestrator | Monday 09 March 2026 00:41:57 +0000 (0:00:00.419) 0:00:17.860 ********** 2026-03-09 00:42:02.314296 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_11658218-3952-45bc-99ae-d48f4d257268) 2026-03-09 00:42:02.314307 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_11658218-3952-45bc-99ae-d48f4d257268) 2026-03-09 00:42:02.314318 | orchestrator | 2026-03-09 00:42:02.314329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.314341 | orchestrator | Monday 09 March 2026 00:41:58 +0000 (0:00:00.452) 0:00:18.313 ********** 2026-03-09 00:42:02.314352 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d43c938e-9c3c-4e95-bc09-26edff92b810) 2026-03-09 00:42:02.314363 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d43c938e-9c3c-4e95-bc09-26edff92b810) 2026-03-09 00:42:02.314374 | orchestrator | 2026-03-09 00:42:02.314386 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:42:02.314415 | orchestrator | Monday 09 March 2026 00:41:58 +0000 (0:00:00.439) 0:00:18.752 ********** 2026-03-09 00:42:02.314426 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3a13d83a-3534-4183-8691-9f150495a6dc) 2026-03-09 00:42:02.314437 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3a13d83a-3534-4183-8691-9f150495a6dc) 2026-03-09 00:42:02.314448 | orchestrator | 2026-03-09 00:42:02.314469 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-09 00:42:02.314480 | orchestrator | Monday 09 March 2026 00:41:59 +0000 (0:00:00.437) 0:00:19.189 ********** 2026-03-09 00:42:02.314491 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-09 00:42:02.314502 | orchestrator | 2026-03-09 00:42:02.314564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:02.314577 | orchestrator | Monday 09 March 2026 00:41:59 +0000 (0:00:00.343) 0:00:19.533 ********** 2026-03-09 00:42:02.314588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-09 00:42:02.314599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-09 00:42:02.314617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-09 00:42:02.314629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-09 00:42:02.314640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-09 00:42:02.314650 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-09 00:42:02.314661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-09 00:42:02.314672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-09 00:42:02.314683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-09 00:42:02.314693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-09 00:42:02.314704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-03-09 00:42:02.314715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-09 00:42:02.314726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-09 00:42:02.314737 | orchestrator | 2026-03-09 00:42:02.314748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:02.314758 | orchestrator | Monday 09 March 2026 00:41:59 +0000 (0:00:00.386) 0:00:19.919 ********** 2026-03-09 00:42:02.314769 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.314780 | orchestrator | 2026-03-09 00:42:02.314791 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:02.314802 | orchestrator | Monday 09 March 2026 00:42:00 +0000 (0:00:00.728) 0:00:20.648 ********** 2026-03-09 00:42:02.314813 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.314824 | orchestrator | 2026-03-09 00:42:02.314834 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:02.314846 | orchestrator | Monday 09 March 2026 00:42:00 +0000 (0:00:00.185) 0:00:20.833 ********** 2026-03-09 00:42:02.314856 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.314867 | orchestrator | 2026-03-09 00:42:02.314878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:02.314889 | orchestrator | Monday 09 March 2026 00:42:00 +0000 (0:00:00.164) 0:00:20.997 ********** 2026-03-09 00:42:02.314900 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.314911 | orchestrator | 2026-03-09 00:42:02.314921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:02.314932 | orchestrator | Monday 09 March 2026 00:42:01 +0000 (0:00:00.155) 0:00:21.153 ********** 2026-03-09 00:42:02.314943 
| orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.314954 | orchestrator | 2026-03-09 00:42:02.314965 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:02.314976 | orchestrator | Monday 09 March 2026 00:42:01 +0000 (0:00:00.148) 0:00:21.301 ********** 2026-03-09 00:42:02.314987 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.315005 | orchestrator | 2026-03-09 00:42:02.315016 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:02.315027 | orchestrator | Monday 09 March 2026 00:42:01 +0000 (0:00:00.141) 0:00:21.443 ********** 2026-03-09 00:42:02.315038 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.315049 | orchestrator | 2026-03-09 00:42:02.315060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:02.315070 | orchestrator | Monday 09 March 2026 00:42:01 +0000 (0:00:00.136) 0:00:21.579 ********** 2026-03-09 00:42:02.315081 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:02.315092 | orchestrator | 2026-03-09 00:42:02.315103 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:02.315114 | orchestrator | Monday 09 March 2026 00:42:01 +0000 (0:00:00.139) 0:00:21.719 ********** 2026-03-09 00:42:02.315125 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-09 00:42:02.315137 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-09 00:42:02.315148 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-09 00:42:02.315159 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-09 00:42:02.315170 | orchestrator | 2026-03-09 00:42:02.315181 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:02.315192 | orchestrator | Monday 09 March 2026 00:42:02 +0000 (0:00:00.603) 0:00:22.322 
********** 2026-03-09 00:42:02.315203 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:07.161341 | orchestrator | 2026-03-09 00:42:07.161469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:07.161497 | orchestrator | Monday 09 March 2026 00:42:02 +0000 (0:00:00.142) 0:00:22.465 ********** 2026-03-09 00:42:07.161612 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:07.161637 | orchestrator | 2026-03-09 00:42:07.161656 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:07.161675 | orchestrator | Monday 09 March 2026 00:42:02 +0000 (0:00:00.141) 0:00:22.607 ********** 2026-03-09 00:42:07.161693 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:07.161713 | orchestrator | 2026-03-09 00:42:07.161732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:42:07.161750 | orchestrator | Monday 09 March 2026 00:42:02 +0000 (0:00:00.149) 0:00:22.758 ********** 2026-03-09 00:42:07.161768 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:07.161787 | orchestrator | 2026-03-09 00:42:07.161806 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-09 00:42:07.161823 | orchestrator | Monday 09 March 2026 00:42:03 +0000 (0:00:00.425) 0:00:23.183 ********** 2026-03-09 00:42:07.161841 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-09 00:42:07.161860 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-09 00:42:07.161878 | orchestrator | 2026-03-09 00:42:07.161899 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-09 00:42:07.161944 | orchestrator | Monday 09 March 2026 00:42:03 +0000 (0:00:00.153) 0:00:23.337 ********** 2026-03-09 00:42:07.161964 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 00:42:07.161984 | orchestrator | 2026-03-09 00:42:07.162002 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-09 00:42:07.162092 | orchestrator | Monday 09 March 2026 00:42:03 +0000 (0:00:00.138) 0:00:23.475 ********** 2026-03-09 00:42:07.162114 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:07.162133 | orchestrator | 2026-03-09 00:42:07.162152 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-09 00:42:07.162173 | orchestrator | Monday 09 March 2026 00:42:03 +0000 (0:00:00.089) 0:00:23.566 ********** 2026-03-09 00:42:07.162187 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:07.162200 | orchestrator | 2026-03-09 00:42:07.162211 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-09 00:42:07.162222 | orchestrator | Monday 09 March 2026 00:42:03 +0000 (0:00:00.095) 0:00:23.662 ********** 2026-03-09 00:42:07.162261 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:42:07.162273 | orchestrator | 2026-03-09 00:42:07.162285 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-09 00:42:07.162296 | orchestrator | Monday 09 March 2026 00:42:03 +0000 (0:00:00.097) 0:00:23.759 ********** 2026-03-09 00:42:07.162307 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'deb603ca-2db3-5399-8e8d-1e0d01641e0c'}}) 2026-03-09 00:42:07.162319 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1f67558-6290-50a7-9c09-ea5e74fb08ab'}}) 2026-03-09 00:42:07.162330 | orchestrator | 2026-03-09 00:42:07.162341 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-09 00:42:07.162351 | orchestrator | Monday 09 March 2026 00:42:03 +0000 (0:00:00.145) 0:00:23.905 ********** 2026-03-09 00:42:07.162363 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'deb603ca-2db3-5399-8e8d-1e0d01641e0c'}})  2026-03-09 00:42:07.162376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1f67558-6290-50a7-9c09-ea5e74fb08ab'}})  2026-03-09 00:42:07.162386 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:07.162397 | orchestrator | 2026-03-09 00:42:07.162415 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-09 00:42:07.162433 | orchestrator | Monday 09 March 2026 00:42:03 +0000 (0:00:00.115) 0:00:24.021 ********** 2026-03-09 00:42:07.162459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'deb603ca-2db3-5399-8e8d-1e0d01641e0c'}})  2026-03-09 00:42:07.162483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1f67558-6290-50a7-9c09-ea5e74fb08ab'}})  2026-03-09 00:42:07.162501 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:07.162550 | orchestrator | 2026-03-09 00:42:07.162568 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-09 00:42:07.162587 | orchestrator | Monday 09 March 2026 00:42:04 +0000 (0:00:00.127) 0:00:24.149 ********** 2026-03-09 00:42:07.162605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'deb603ca-2db3-5399-8e8d-1e0d01641e0c'}})  2026-03-09 00:42:07.162622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1f67558-6290-50a7-9c09-ea5e74fb08ab'}})  2026-03-09 00:42:07.162637 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:07.162654 | orchestrator | 2026-03-09 00:42:07.162673 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-09 00:42:07.162689 | orchestrator | Monday 09 March 2026 00:42:04 +0000 
(0:00:00.127) 0:00:24.276 ********** 2026-03-09 00:42:07.162706 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:42:07.162722 | orchestrator | 2026-03-09 00:42:07.162741 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-09 00:42:07.162758 | orchestrator | Monday 09 March 2026 00:42:04 +0000 (0:00:00.111) 0:00:24.388 ********** 2026-03-09 00:42:07.162776 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:42:07.162794 | orchestrator | 2026-03-09 00:42:07.162812 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-09 00:42:07.162830 | orchestrator | Monday 09 March 2026 00:42:04 +0000 (0:00:00.130) 0:00:24.518 ********** 2026-03-09 00:42:07.162879 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:07.162900 | orchestrator | 2026-03-09 00:42:07.162918 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-09 00:42:07.162938 | orchestrator | Monday 09 March 2026 00:42:04 +0000 (0:00:00.255) 0:00:24.774 ********** 2026-03-09 00:42:07.162958 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:07.162976 | orchestrator | 2026-03-09 00:42:07.162995 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-09 00:42:07.163014 | orchestrator | Monday 09 March 2026 00:42:04 +0000 (0:00:00.111) 0:00:24.885 ********** 2026-03-09 00:42:07.163033 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:42:07.163067 | orchestrator | 2026-03-09 00:42:07.163086 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-09 00:42:07.163105 | orchestrator | Monday 09 March 2026 00:42:04 +0000 (0:00:00.109) 0:00:24.994 ********** 2026-03-09 00:42:07.163124 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:42:07.163143 | orchestrator |  "ceph_osd_devices": { 2026-03-09 00:42:07.163162 | orchestrator |  "sdb": 
{
2026-03-09 00:42:07.163181 | orchestrator |  "osd_lvm_uuid": "deb603ca-2db3-5399-8e8d-1e0d01641e0c"
2026-03-09 00:42:07.163200 | orchestrator |  },
2026-03-09 00:42:07.163220 | orchestrator |  "sdc": {
2026-03-09 00:42:07.163237 | orchestrator |  "osd_lvm_uuid": "c1f67558-6290-50a7-9c09-ea5e74fb08ab"
2026-03-09 00:42:07.163256 | orchestrator |  }
2026-03-09 00:42:07.163274 | orchestrator |  }
2026-03-09 00:42:07.163293 | orchestrator | }
2026-03-09 00:42:07.163312 | orchestrator |
2026-03-09 00:42:07.163331 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-09 00:42:07.163349 | orchestrator | Monday 09 March 2026 00:42:05 +0000 (0:00:00.110) 0:00:25.105 **********
2026-03-09 00:42:07.163367 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:42:07.163386 | orchestrator |
2026-03-09 00:42:07.163404 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-09 00:42:07.163424 | orchestrator | Monday 09 March 2026 00:42:05 +0000 (0:00:00.114) 0:00:25.220 **********
2026-03-09 00:42:07.163442 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:42:07.163461 | orchestrator |
2026-03-09 00:42:07.163477 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-09 00:42:07.163495 | orchestrator | Monday 09 March 2026 00:42:05 +0000 (0:00:00.136) 0:00:25.357 **********
2026-03-09 00:42:07.163541 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:42:07.163561 | orchestrator |
2026-03-09 00:42:07.163581 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-09 00:42:07.163615 | orchestrator | Monday 09 March 2026 00:42:05 +0000 (0:00:00.088) 0:00:25.446 **********
2026-03-09 00:42:07.163635 | orchestrator | changed: [testbed-node-4] => {
2026-03-09 00:42:07.163653 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-03-09 00:42:07.163672 | orchestrator |  "ceph_osd_devices": {
2026-03-09 00:42:07.163684 | orchestrator |  "sdb": {
2026-03-09 00:42:07.163696 | orchestrator |  "osd_lvm_uuid": "deb603ca-2db3-5399-8e8d-1e0d01641e0c"
2026-03-09 00:42:07.163708 | orchestrator |  },
2026-03-09 00:42:07.163728 | orchestrator |  "sdc": {
2026-03-09 00:42:07.163746 | orchestrator |  "osd_lvm_uuid": "c1f67558-6290-50a7-9c09-ea5e74fb08ab"
2026-03-09 00:42:07.163764 | orchestrator |  }
2026-03-09 00:42:07.163782 | orchestrator |  },
2026-03-09 00:42:07.163799 | orchestrator |  "lvm_volumes": [
2026-03-09 00:42:07.163817 | orchestrator |  {
2026-03-09 00:42:07.163834 | orchestrator |  "data": "osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c",
2026-03-09 00:42:07.163850 | orchestrator |  "data_vg": "ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c"
2026-03-09 00:42:07.163866 | orchestrator |  },
2026-03-09 00:42:07.163882 | orchestrator |  {
2026-03-09 00:42:07.163900 | orchestrator |  "data": "osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab",
2026-03-09 00:42:07.163917 | orchestrator |  "data_vg": "ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab"
2026-03-09 00:42:07.163935 | orchestrator |  }
2026-03-09 00:42:07.163954 | orchestrator |  ]
2026-03-09 00:42:07.163971 | orchestrator |  }
2026-03-09 00:42:07.163987 | orchestrator | }
2026-03-09 00:42:07.164005 | orchestrator |
2026-03-09 00:42:07.164023 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-09 00:42:07.164040 | orchestrator | Monday 09 March 2026 00:42:05 +0000 (0:00:00.164) 0:00:25.611 **********
2026-03-09 00:42:07.164058 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-09 00:42:07.164076 | orchestrator |
2026-03-09 00:42:07.164112 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-09 00:42:07.164128 | orchestrator |
2026-03-09 00:42:07.164146 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-09 00:42:07.164163 | orchestrator | Monday 09 March 2026 00:42:06 +0000 (0:00:00.777) 0:00:26.388 **********
2026-03-09 00:42:07.164180 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-09 00:42:07.164197 | orchestrator |
2026-03-09 00:42:07.164215 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-09 00:42:07.164231 | orchestrator | Monday 09 March 2026 00:42:06 +0000 (0:00:00.469) 0:00:26.858 **********
2026-03-09 00:42:07.164248 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:42:07.164264 | orchestrator |
2026-03-09 00:42:07.164281 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:07.164298 | orchestrator | Monday 09 March 2026 00:42:06 +0000 (0:00:00.180) 0:00:27.038 **********
2026-03-09 00:42:07.164316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-09 00:42:07.164334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-09 00:42:07.164352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-09 00:42:07.164370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-09 00:42:07.164387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-09 00:42:07.164425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-09 00:42:14.427126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-09 00:42:14.427246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-09 00:42:14.427270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-09 00:42:14.427290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-09 00:42:14.427327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-09 00:42:14.427347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-09 00:42:14.427366 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-09 00:42:14.427384 | orchestrator |
2026-03-09 00:42:14.427405 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.427424 | orchestrator | Monday 09 March 2026 00:42:07 +0000 (0:00:00.276) 0:00:27.315 **********
2026-03-09 00:42:14.427443 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.427464 | orchestrator |
2026-03-09 00:42:14.427482 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.427500 | orchestrator | Monday 09 March 2026 00:42:07 +0000 (0:00:00.161) 0:00:27.476 **********
2026-03-09 00:42:14.427548 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.427567 | orchestrator |
2026-03-09 00:42:14.427586 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.427607 | orchestrator | Monday 09 March 2026 00:42:07 +0000 (0:00:00.183) 0:00:27.660 **********
2026-03-09 00:42:14.427628 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.427646 | orchestrator |
2026-03-09 00:42:14.427665 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.427681 | orchestrator | Monday 09 March 2026 00:42:07 +0000 (0:00:00.156) 0:00:27.816 **********
2026-03-09 00:42:14.427701 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.427721 | orchestrator |
2026-03-09 00:42:14.427741 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.427760 | orchestrator | Monday 09 March 2026 00:42:07 +0000 (0:00:00.180) 0:00:27.997 **********
2026-03-09 00:42:14.427811 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.427832 | orchestrator |
2026-03-09 00:42:14.427850 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.427870 | orchestrator | Monday 09 March 2026 00:42:08 +0000 (0:00:00.159) 0:00:28.156 **********
2026-03-09 00:42:14.427890 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.427910 | orchestrator |
2026-03-09 00:42:14.427930 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.427950 | orchestrator | Monday 09 March 2026 00:42:08 +0000 (0:00:00.169) 0:00:28.325 **********
2026-03-09 00:42:14.427971 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.427991 | orchestrator |
2026-03-09 00:42:14.428012 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.428033 | orchestrator | Monday 09 March 2026 00:42:08 +0000 (0:00:00.201) 0:00:28.527 **********
2026-03-09 00:42:14.428053 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.428073 | orchestrator |
2026-03-09 00:42:14.428093 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.428112 | orchestrator | Monday 09 March 2026 00:42:08 +0000 (0:00:00.187) 0:00:28.714 **********
2026-03-09 00:42:14.428133 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a)
2026-03-09 00:42:14.428155 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a)
2026-03-09 00:42:14.428175 | orchestrator |
2026-03-09 00:42:14.428194 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.428215 | orchestrator | Monday 09 March 2026 00:42:09 +0000 (0:00:00.681) 0:00:29.395 **********
2026-03-09 00:42:14.428257 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_34bdd215-cdf5-4909-8dd4-972bf1b79030)
2026-03-09 00:42:14.428280 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_34bdd215-cdf5-4909-8dd4-972bf1b79030)
2026-03-09 00:42:14.428301 | orchestrator |
2026-03-09 00:42:14.428321 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.428340 | orchestrator | Monday 09 March 2026 00:42:09 +0000 (0:00:00.372) 0:00:29.768 **********
2026-03-09 00:42:14.428362 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_709b939c-9ac4-47b1-b5c3-cb1d8710b2fd)
2026-03-09 00:42:14.428383 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_709b939c-9ac4-47b1-b5c3-cb1d8710b2fd)
2026-03-09 00:42:14.428403 | orchestrator |
2026-03-09 00:42:14.428422 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.428441 | orchestrator | Monday 09 March 2026 00:42:10 +0000 (0:00:00.383) 0:00:30.151 **********
2026-03-09 00:42:14.428461 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_069ee836-7f84-4f9f-9b43-0fd45db025c2)
2026-03-09 00:42:14.428480 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_069ee836-7f84-4f9f-9b43-0fd45db025c2)
2026-03-09 00:42:14.428498 | orchestrator |
2026-03-09 00:42:14.428544 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:42:14.428565 | orchestrator | Monday 09 March 2026 00:42:10 +0000 (0:00:00.384) 0:00:30.536 **********
2026-03-09 00:42:14.428582 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-09 00:42:14.428600 | orchestrator |
2026-03-09 00:42:14.428617 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.428657 | orchestrator | Monday 09 March 2026 00:42:10 +0000 (0:00:00.316) 0:00:30.853 **********
2026-03-09 00:42:14.428668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-09 00:42:14.428678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-09 00:42:14.428688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-09 00:42:14.428698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-09 00:42:14.428716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-09 00:42:14.428725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-09 00:42:14.428735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-09 00:42:14.428744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-09 00:42:14.428754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-09 00:42:14.428763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-09 00:42:14.428773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-09 00:42:14.428782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-09 00:42:14.428792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-09 00:42:14.428801 | orchestrator |
2026-03-09 00:42:14.428811 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.428820 | orchestrator | Monday 09 March 2026 00:42:11 +0000 (0:00:00.371) 0:00:31.224 **********
2026-03-09 00:42:14.428830 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.428840 | orchestrator |
2026-03-09 00:42:14.428849 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.428859 | orchestrator | Monday 09 March 2026 00:42:11 +0000 (0:00:00.187) 0:00:31.412 **********
2026-03-09 00:42:14.428868 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.428878 | orchestrator |
2026-03-09 00:42:14.428888 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.428897 | orchestrator | Monday 09 March 2026 00:42:11 +0000 (0:00:00.190) 0:00:31.602 **********
2026-03-09 00:42:14.428907 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.428916 | orchestrator |
2026-03-09 00:42:14.428926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.428936 | orchestrator | Monday 09 March 2026 00:42:11 +0000 (0:00:00.192) 0:00:31.795 **********
2026-03-09 00:42:14.428945 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.428955 | orchestrator |
2026-03-09 00:42:14.428964 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.428974 | orchestrator | Monday 09 March 2026 00:42:11 +0000 (0:00:00.195) 0:00:31.990 **********
2026-03-09 00:42:14.428983 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.428993 | orchestrator |
2026-03-09 00:42:14.429003 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.429012 | orchestrator | Monday 09 March 2026 00:42:12 +0000 (0:00:00.223) 0:00:32.214 **********
2026-03-09 00:42:14.429022 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.429031 | orchestrator |
2026-03-09 00:42:14.429041 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.429050 | orchestrator | Monday 09 March 2026 00:42:12 +0000 (0:00:00.520) 0:00:32.734 **********
2026-03-09 00:42:14.429060 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.429069 | orchestrator |
2026-03-09 00:42:14.429079 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.429088 | orchestrator | Monday 09 March 2026 00:42:12 +0000 (0:00:00.175) 0:00:32.910 **********
2026-03-09 00:42:14.429098 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.429108 | orchestrator |
2026-03-09 00:42:14.429117 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.429126 | orchestrator | Monday 09 March 2026 00:42:13 +0000 (0:00:00.189) 0:00:33.099 **********
2026-03-09 00:42:14.429136 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-09 00:42:14.429152 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-09 00:42:14.429162 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-09 00:42:14.429172 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-09 00:42:14.429181 | orchestrator |
2026-03-09 00:42:14.429191 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.429200 | orchestrator | Monday 09 March 2026 00:42:13 +0000 (0:00:00.603) 0:00:33.703 **********
2026-03-09 00:42:14.429210 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.429219 | orchestrator |
2026-03-09 00:42:14.429229 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.429239 | orchestrator | Monday 09 March 2026 00:42:13 +0000 (0:00:00.198) 0:00:33.901 **********
2026-03-09 00:42:14.429248 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.429257 | orchestrator |
2026-03-09 00:42:14.429267 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.429277 | orchestrator | Monday 09 March 2026 00:42:14 +0000 (0:00:00.208) 0:00:34.109 **********
2026-03-09 00:42:14.429286 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.429295 | orchestrator |
2026-03-09 00:42:14.429305 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:42:14.429315 | orchestrator | Monday 09 March 2026 00:42:14 +0000 (0:00:00.216) 0:00:34.325 **********
2026-03-09 00:42:14.429324 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:14.429334 | orchestrator |
2026-03-09 00:42:14.429350 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-09 00:42:18.310364 | orchestrator | Monday 09 March 2026 00:42:14 +0000 (0:00:00.192) 0:00:34.518 **********
2026-03-09 00:42:18.310465 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-09 00:42:18.310480 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-09 00:42:18.310492 | orchestrator |
2026-03-09 00:42:18.310504 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-09 00:42:18.310563 | orchestrator | Monday 09 March 2026 00:42:14 +0000 (0:00:00.168) 0:00:34.687 **********
2026-03-09 00:42:18.310575 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:18.310587 | orchestrator |
2026-03-09 00:42:18.310598 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-09 00:42:18.310609 | orchestrator | Monday 09 March 2026 00:42:14 +0000 (0:00:00.125) 0:00:34.812 **********
2026-03-09 00:42:18.310640 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:18.310652 | orchestrator |
2026-03-09 00:42:18.310663 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-09 00:42:18.310674 | orchestrator | Monday 09 March 2026 00:42:14 +0000 (0:00:00.153) 0:00:34.965 **********
2026-03-09 00:42:18.310685 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:18.310696 | orchestrator |
2026-03-09 00:42:18.310708 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-09 00:42:18.310719 | orchestrator | Monday 09 March 2026 00:42:15 +0000 (0:00:00.358) 0:00:35.324 **********
2026-03-09 00:42:18.310730 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:42:18.310743 | orchestrator |
2026-03-09 00:42:18.310754 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-09 00:42:18.310765 | orchestrator | Monday 09 March 2026 00:42:15 +0000 (0:00:00.156) 0:00:35.481 **********
2026-03-09 00:42:18.310776 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d8e344b-ecd1-5c90-b783-cb125ac7004a'}})
2026-03-09 00:42:18.310793 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd6be2487-d224-518f-9009-30806e6fa587'}})
2026-03-09 00:42:18.310804 | orchestrator |
2026-03-09 00:42:18.310815 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-09 00:42:18.310826 | orchestrator | Monday 09 March 2026 00:42:15 +0000 (0:00:00.193) 0:00:35.675 **********
2026-03-09 00:42:18.310838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d8e344b-ecd1-5c90-b783-cb125ac7004a'}})
2026-03-09 00:42:18.310870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd6be2487-d224-518f-9009-30806e6fa587'}})
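The "Generate lvm_volumes structure (block only)" step maps each entry of `ceph_osd_devices` to a `data`/`data_vg` pair; the naming convention (`osd-block-<uuid>` on VG `ceph-<uuid>`) is visible verbatim in the "Print configuration data" output. A minimal sketch of that mapping in Python (the playbook does this with `set_fact`/Jinja2; the function name is mine):

```python
# Sketch of the block-only lvm_volumes mapping shown in this log.
# Illustrative helper, not part of the osism/testbed tooling.
def lvm_volumes_block_only(ceph_osd_devices):
    """Derive one LV/VG name pair per OSD device from its osd_lvm_uuid."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for device, spec in sorted(ceph_osd_devices.items())
    ]

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "5d8e344b-ecd1-5c90-b783-cb125ac7004a"},
    "sdc": {"osd_lvm_uuid": "d6be2487-d224-518f-9009-30806e6fa587"},
}
lvm_volumes = lvm_volumes_block_only(ceph_osd_devices)
```

Because the names embed the per-device UUID, the later "Compile lvm_volumes" step can merge block-only, block+db, and block+wal variants without name collisions.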
2026-03-09 00:42:18.310881 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:18.310892 | orchestrator |
2026-03-09 00:42:18.310905 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-09 00:42:18.310917 | orchestrator | Monday 09 March 2026 00:42:15 +0000 (0:00:00.163) 0:00:35.838 **********
2026-03-09 00:42:18.310930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d8e344b-ecd1-5c90-b783-cb125ac7004a'}})
2026-03-09 00:42:18.310943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd6be2487-d224-518f-9009-30806e6fa587'}})
2026-03-09 00:42:18.310956 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:18.310968 | orchestrator |
2026-03-09 00:42:18.310980 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-09 00:42:18.310992 | orchestrator | Monday 09 March 2026 00:42:15 +0000 (0:00:00.140) 0:00:35.978 **********
2026-03-09 00:42:18.311005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d8e344b-ecd1-5c90-b783-cb125ac7004a'}})
2026-03-09 00:42:18.311018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd6be2487-d224-518f-9009-30806e6fa587'}})
2026-03-09 00:42:18.311030 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:18.311043 | orchestrator |
2026-03-09 00:42:18.311055 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-09 00:42:18.311068 | orchestrator | Monday 09 March 2026 00:42:15 +0000 (0:00:00.112) 0:00:36.091 **********
2026-03-09 00:42:18.311080 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:42:18.311092 | orchestrator |
2026-03-09 00:42:18.311105 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-09 00:42:18.311117 | orchestrator | Monday 09 March 2026 00:42:16 +0000 (0:00:00.141) 0:00:36.233 **********
2026-03-09 00:42:18.311129 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:42:18.311141 | orchestrator |
2026-03-09 00:42:18.311154 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-09 00:42:18.311167 | orchestrator | Monday 09 March 2026 00:42:16 +0000 (0:00:00.120) 0:00:36.354 **********
2026-03-09 00:42:18.311180 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:18.311192 | orchestrator |
2026-03-09 00:42:18.311202 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-09 00:42:18.311213 | orchestrator | Monday 09 March 2026 00:42:16 +0000 (0:00:00.096) 0:00:36.451 **********
2026-03-09 00:42:18.311224 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:18.311235 | orchestrator |
2026-03-09 00:42:18.311246 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-09 00:42:18.311257 | orchestrator | Monday 09 March 2026 00:42:16 +0000 (0:00:00.095) 0:00:36.547 **********
2026-03-09 00:42:18.311267 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:18.311278 | orchestrator |
2026-03-09 00:42:18.311289 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-09 00:42:18.311300 | orchestrator | Monday 09 March 2026 00:42:16 +0000 (0:00:00.099) 0:00:36.646 **********
2026-03-09 00:42:18.311310 | orchestrator | ok: [testbed-node-5] => {
2026-03-09 00:42:18.311321 | orchestrator |  "ceph_osd_devices": {
2026-03-09 00:42:18.311332 | orchestrator |  "sdb": {
2026-03-09 00:42:18.311360 | orchestrator |  "osd_lvm_uuid": "5d8e344b-ecd1-5c90-b783-cb125ac7004a"
2026-03-09 00:42:18.311372 | orchestrator |  },
2026-03-09 00:42:18.311383 | orchestrator |  "sdc": {
2026-03-09 00:42:18.311394 | orchestrator |  "osd_lvm_uuid": "d6be2487-d224-518f-9009-30806e6fa587"
2026-03-09 00:42:18.311405 | orchestrator |  }
2026-03-09 00:42:18.311416 | orchestrator |  }
2026-03-09 00:42:18.311427 | orchestrator | }
2026-03-09 00:42:18.311438 | orchestrator |
2026-03-09 00:42:18.311469 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-09 00:42:18.311481 | orchestrator | Monday 09 March 2026 00:42:16 +0000 (0:00:00.118) 0:00:36.764 **********
2026-03-09 00:42:18.311492 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:18.311503 | orchestrator |
2026-03-09 00:42:18.311530 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-09 00:42:18.311542 | orchestrator | Monday 09 March 2026 00:42:16 +0000 (0:00:00.328) 0:00:37.093 **********
2026-03-09 00:42:18.311552 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:18.311563 | orchestrator |
2026-03-09 00:42:18.311574 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-09 00:42:18.311584 | orchestrator | Monday 09 March 2026 00:42:17 +0000 (0:00:00.136) 0:00:37.229 **********
2026-03-09 00:42:18.311595 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:42:18.311606 | orchestrator |
2026-03-09 00:42:18.311617 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-09 00:42:18.311628 | orchestrator | Monday 09 March 2026 00:42:17 +0000 (0:00:00.133) 0:00:37.363 **********
2026-03-09 00:42:18.311638 | orchestrator | changed: [testbed-node-5] => {
2026-03-09 00:42:18.311650 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-03-09 00:42:18.311660 | orchestrator |  "ceph_osd_devices": {
2026-03-09 00:42:18.311672 | orchestrator |  "sdb": {
2026-03-09 00:42:18.311682 | orchestrator |  "osd_lvm_uuid": "5d8e344b-ecd1-5c90-b783-cb125ac7004a"
2026-03-09 00:42:18.311693 | orchestrator |  },
2026-03-09 00:42:18.311704 | orchestrator |  "sdc": {
2026-03-09 00:42:18.311715 | orchestrator |  "osd_lvm_uuid": "d6be2487-d224-518f-9009-30806e6fa587"
2026-03-09 00:42:18.311726 | orchestrator |  }
2026-03-09 00:42:18.311737 | orchestrator |  },
2026-03-09 00:42:18.311748 | orchestrator |  "lvm_volumes": [
2026-03-09 00:42:18.311759 | orchestrator |  {
2026-03-09 00:42:18.311770 | orchestrator |  "data": "osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a",
2026-03-09 00:42:18.311781 | orchestrator |  "data_vg": "ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a"
2026-03-09 00:42:18.311792 | orchestrator |  },
2026-03-09 00:42:18.311807 | orchestrator |  {
2026-03-09 00:42:18.311819 | orchestrator |  "data": "osd-block-d6be2487-d224-518f-9009-30806e6fa587",
2026-03-09 00:42:18.311830 | orchestrator |  "data_vg": "ceph-d6be2487-d224-518f-9009-30806e6fa587"
2026-03-09 00:42:18.311841 | orchestrator |  }
2026-03-09 00:42:18.311852 | orchestrator |  ]
2026-03-09 00:42:18.311863 | orchestrator |  }
2026-03-09 00:42:18.311874 | orchestrator | }
2026-03-09 00:42:18.311885 | orchestrator |
2026-03-09 00:42:18.311895 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-09 00:42:18.311906 | orchestrator | Monday 09 March 2026 00:42:17 +0000 (0:00:00.224) 0:00:37.587 **********
2026-03-09 00:42:18.311917 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-09 00:42:18.311928 | orchestrator |
2026-03-09 00:42:18.311939 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:42:18.311950 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-09 00:42:18.311962 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-09 00:42:18.311973 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-09 00:42:18.311984 | orchestrator |
2026-03-09 00:42:18.311995 | orchestrator |
2026-03-09 00:42:18.312006 | orchestrator |
2026-03-09 00:42:18.312017 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:42:18.312028 | orchestrator | Monday 09 March 2026 00:42:18 +0000 (0:00:00.808) 0:00:38.396 **********
2026-03-09 00:42:18.312047 | orchestrator | ===============================================================================
2026-03-09 00:42:18.312058 | orchestrator | Write configuration file ------------------------------------------------ 3.40s
2026-03-09 00:42:18.312068 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s
2026-03-09 00:42:18.312086 | orchestrator | Add known partitions to the list of available block devices ------------- 1.15s
2026-03-09 00:42:18.312097 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.96s
2026-03-09 00:42:18.312108 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s
2026-03-09 00:42:18.312119 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s
2026-03-09 00:42:18.312130 | orchestrator | Print configuration data ------------------------------------------------ 0.79s
2026-03-09 00:42:18.312140 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2026-03-09 00:42:18.312151 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-03-09 00:42:18.312162 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s
2026-03-09 00:42:18.312173 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-03-09 00:42:18.312184 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2026-03-09 00:42:18.312195 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-03-09 00:42:18.312212 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2026-03-09 00:42:18.562873 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2026-03-09 00:42:18.562950 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.59s
2026-03-09 00:42:18.562959 | orchestrator | Print WAL devices ------------------------------------------------------- 0.58s
2026-03-09 00:42:18.562966 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.58s
2026-03-09 00:42:18.562973 | orchestrator | Add known partitions to the list of available block devices ------------- 0.52s
2026-03-09 00:42:18.562979 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.51s
2026-03-09 00:42:41.091798 | orchestrator | 2026-03-09 00:42:41 | INFO  | Task bb9edd5c-c43b-42d4-bd08-48c916a150b2 (sync inventory) is running in background. Output coming soon.
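The PLAY RECAP above is what a CI check would inspect to decide whether this play succeeded (`failed=0` and `unreachable=0` on every host). A small sketch of parsing such a recap line; the key=value format is the standard Ansible recap shown above, but this helper is illustrative and not part of the testbed tooling:

```python
import re

# Illustrative sketch: parse one Ansible PLAY RECAP line into counters
# and decide whether the host finished cleanly.
def parse_recap_line(line):
    host, _, rest = line.partition(":")
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

line = ("testbed-node-5 : ok=42  changed=2  unreachable=0 "
        "failed=0 skipped=32  rescued=0 ignored=0")
host, counters = parse_recap_line(line)
healthy = counters["failed"] == 0 and counters["unreachable"] == 0
```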
2026-03-09 00:43:06.655614 | orchestrator | 2026-03-09 00:42:42 | INFO  | Starting group_vars file reorganization
2026-03-09 00:43:06.655690 | orchestrator | 2026-03-09 00:42:42 | INFO  | Moved 0 file(s) to their respective directories
2026-03-09 00:43:06.655699 | orchestrator | 2026-03-09 00:42:42 | INFO  | Group_vars file reorganization completed
2026-03-09 00:43:06.655704 | orchestrator | 2026-03-09 00:42:46 | INFO  | Starting variable preparation from inventory
2026-03-09 00:43:06.655710 | orchestrator | 2026-03-09 00:42:48 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-09 00:43:06.655716 | orchestrator | 2026-03-09 00:42:48 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-09 00:43:06.655736 | orchestrator | 2026-03-09 00:42:48 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-09 00:43:06.655741 | orchestrator | 2026-03-09 00:42:48 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-09 00:43:06.655746 | orchestrator | 2026-03-09 00:42:48 | INFO  | Variable preparation completed
2026-03-09 00:43:06.655752 | orchestrator | 2026-03-09 00:42:50 | INFO  | Starting inventory overwrite handling
2026-03-09 00:43:06.655757 | orchestrator | 2026-03-09 00:42:50 | INFO  | Handling group overwrites in 99-overwrite
2026-03-09 00:43:06.655762 | orchestrator | 2026-03-09 00:42:50 | INFO  | Removing group frr:children from 60-generic
2026-03-09 00:43:06.655784 | orchestrator | 2026-03-09 00:42:50 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-09 00:43:06.655790 | orchestrator | 2026-03-09 00:42:50 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-09 00:43:06.655795 | orchestrator | 2026-03-09 00:42:50 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-09 00:43:06.655800 | orchestrator | 2026-03-09 00:42:50 | INFO  | Handling group overwrites in 20-roles
2026-03-09 00:43:06.655805 | orchestrator | 2026-03-09 00:42:50 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-09 00:43:06.655810 | orchestrator | 2026-03-09 00:42:50 | INFO  | Removed 5 group(s) in total
2026-03-09 00:43:06.655815 | orchestrator | 2026-03-09 00:42:50 | INFO  | Inventory overwrite handling completed
2026-03-09 00:43:06.655820 | orchestrator | 2026-03-09 00:42:51 | INFO  | Starting merge of inventory files
2026-03-09 00:43:06.655824 | orchestrator | 2026-03-09 00:42:51 | INFO  | Inventory files merged successfully
2026-03-09 00:43:06.655829 | orchestrator | 2026-03-09 00:42:55 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-09 00:43:06.655834 | orchestrator | 2026-03-09 00:43:05 | INFO  | Successfully wrote ClusterShell configuration
2026-03-09 00:43:06.655840 | orchestrator | [master 112b739] 2026-03-09-00-43
2026-03-09 00:43:06.655846 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-09 00:43:08.590993 | orchestrator | 2026-03-09 00:43:08 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-09 00:43:08.652977 | orchestrator | 2026-03-09 00:43:08 | INFO  | Task c56f2e7e-ef92-48ba-bceb-b384f63bcbe2 (ceph-create-lvm-devices) was prepared for execution.
2026-03-09 00:43:08.653028 | orchestrator | 2026-03-09 00:43:08 | INFO  | It takes a moment until task c56f2e7e-ef92-48ba-bceb-b384f63bcbe2 (ceph-create-lvm-devices) has been started and output is visible here.
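The "Generating ClusterShell configuration from Ansible inventory" step above can be sketched roughly as follows. This is a minimal illustration, not the actual OSISM implementation: the function name `write_clush_groups` and the group layout are assumptions; ClusterShell file-based group sources do use `group: host1,host2` lines.

```python
# Sketch: render an Ansible-style {group: [hosts]} mapping as a
# ClusterShell groups file. Names/layout are illustrative assumptions;
# the real OSISM tooling may differ.

def write_clush_groups(inventory: dict) -> str:
    """Return 'group: host1,host2' lines, sorted for stable output."""
    lines = []
    for group in sorted(inventory):
        hosts = ",".join(sorted(inventory[group]))
        lines.append(f"{group}: {hosts}")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    inv = {
        "ceph": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
        "manager": ["testbed-manager"],
    }
    print(write_clush_groups(inv), end="")
```

The output of such a step is what makes `clush -g ceph …` style fan-out commands possible on the manager node.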
2026-03-09 00:43:19.791456 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-09 00:43:19.791571 | orchestrator | 2.16.14 2026-03-09 00:43:19.791583 | orchestrator | 2026-03-09 00:43:19.791593 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-09 00:43:19.791602 | orchestrator | 2026-03-09 00:43:19.791610 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-09 00:43:19.791619 | orchestrator | Monday 09 March 2026 00:43:13 +0000 (0:00:00.304) 0:00:00.304 ********** 2026-03-09 00:43:19.791628 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-09 00:43:19.791637 | orchestrator | 2026-03-09 00:43:19.791645 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-09 00:43:19.791653 | orchestrator | Monday 09 March 2026 00:43:13 +0000 (0:00:00.222) 0:00:00.526 ********** 2026-03-09 00:43:19.791661 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:43:19.791669 | orchestrator | 2026-03-09 00:43:19.791677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.791685 | orchestrator | Monday 09 March 2026 00:43:13 +0000 (0:00:00.216) 0:00:00.743 ********** 2026-03-09 00:43:19.791693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-09 00:43:19.791701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-09 00:43:19.791709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-09 00:43:19.791717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-09 00:43:19.791725 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-09 
00:43:19.791733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-09 00:43:19.791741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-09 00:43:19.791767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-09 00:43:19.791775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-09 00:43:19.791783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-09 00:43:19.791791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-09 00:43:19.791798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-09 00:43:19.791806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-09 00:43:19.791814 | orchestrator | 2026-03-09 00:43:19.791822 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.791830 | orchestrator | Monday 09 March 2026 00:43:14 +0000 (0:00:00.499) 0:00:01.242 ********** 2026-03-09 00:43:19.791838 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.791846 | orchestrator | 2026-03-09 00:43:19.791853 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.791861 | orchestrator | Monday 09 March 2026 00:43:14 +0000 (0:00:00.183) 0:00:01.426 ********** 2026-03-09 00:43:19.791869 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.791877 | orchestrator | 2026-03-09 00:43:19.791885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.791893 | orchestrator | Monday 09 March 2026 00:43:14 +0000 (0:00:00.181) 0:00:01.607 ********** 2026-03-09 
00:43:19.791900 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.791908 | orchestrator | 2026-03-09 00:43:19.791916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.791924 | orchestrator | Monday 09 March 2026 00:43:14 +0000 (0:00:00.171) 0:00:01.779 ********** 2026-03-09 00:43:19.791932 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.791940 | orchestrator | 2026-03-09 00:43:19.791948 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.791956 | orchestrator | Monday 09 March 2026 00:43:14 +0000 (0:00:00.188) 0:00:01.967 ********** 2026-03-09 00:43:19.791963 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.791971 | orchestrator | 2026-03-09 00:43:19.791979 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.792001 | orchestrator | Monday 09 March 2026 00:43:15 +0000 (0:00:00.196) 0:00:02.164 ********** 2026-03-09 00:43:19.792010 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.792019 | orchestrator | 2026-03-09 00:43:19.792029 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.792038 | orchestrator | Monday 09 March 2026 00:43:15 +0000 (0:00:00.188) 0:00:02.352 ********** 2026-03-09 00:43:19.792051 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.792064 | orchestrator | 2026-03-09 00:43:19.792074 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.792083 | orchestrator | Monday 09 March 2026 00:43:15 +0000 (0:00:00.189) 0:00:02.542 ********** 2026-03-09 00:43:19.792092 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.792102 | orchestrator | 2026-03-09 00:43:19.792111 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-09 00:43:19.792120 | orchestrator | Monday 09 March 2026 00:43:15 +0000 (0:00:00.163) 0:00:02.705 ********** 2026-03-09 00:43:19.792129 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa) 2026-03-09 00:43:19.792139 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa) 2026-03-09 00:43:19.792148 | orchestrator | 2026-03-09 00:43:19.792157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.792179 | orchestrator | Monday 09 March 2026 00:43:15 +0000 (0:00:00.360) 0:00:03.066 ********** 2026-03-09 00:43:19.792195 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_26907958-5014-4e4e-aaae-f132ebc9345b) 2026-03-09 00:43:19.792205 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_26907958-5014-4e4e-aaae-f132ebc9345b) 2026-03-09 00:43:19.792214 | orchestrator | 2026-03-09 00:43:19.792223 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.792233 | orchestrator | Monday 09 March 2026 00:43:16 +0000 (0:00:00.521) 0:00:03.587 ********** 2026-03-09 00:43:19.792242 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_763f54df-2df6-4a17-b758-6e7498448fae) 2026-03-09 00:43:19.792251 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_763f54df-2df6-4a17-b758-6e7498448fae) 2026-03-09 00:43:19.792260 | orchestrator | 2026-03-09 00:43:19.792269 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.792278 | orchestrator | Monday 09 March 2026 00:43:16 +0000 (0:00:00.521) 0:00:04.109 ********** 2026-03-09 00:43:19.792288 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_49ad7546-ef2d-4696-ae5b-c2e2e05846ff) 2026-03-09 00:43:19.792297 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_49ad7546-ef2d-4696-ae5b-c2e2e05846ff) 2026-03-09 00:43:19.792306 | orchestrator | 2026-03-09 00:43:19.792316 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:43:19.792326 | orchestrator | Monday 09 March 2026 00:43:17 +0000 (0:00:00.658) 0:00:04.767 ********** 2026-03-09 00:43:19.792339 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-09 00:43:19.792352 | orchestrator | 2026-03-09 00:43:19.792365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.792381 | orchestrator | Monday 09 March 2026 00:43:17 +0000 (0:00:00.288) 0:00:05.056 ********** 2026-03-09 00:43:19.792400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-09 00:43:19.792412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-09 00:43:19.792424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-09 00:43:19.792437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-09 00:43:19.792448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-09 00:43:19.792468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-09 00:43:19.792481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-09 00:43:19.792495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-09 00:43:19.792529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-09 00:43:19.792544 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-09 00:43:19.792557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-09 00:43:19.792568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-09 00:43:19.792581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-09 00:43:19.792594 | orchestrator | 2026-03-09 00:43:19.792607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.792620 | orchestrator | Monday 09 March 2026 00:43:18 +0000 (0:00:00.346) 0:00:05.403 ********** 2026-03-09 00:43:19.792632 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.792646 | orchestrator | 2026-03-09 00:43:19.792658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.792671 | orchestrator | Monday 09 March 2026 00:43:18 +0000 (0:00:00.215) 0:00:05.618 ********** 2026-03-09 00:43:19.792696 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.792709 | orchestrator | 2026-03-09 00:43:19.792723 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.792736 | orchestrator | Monday 09 March 2026 00:43:18 +0000 (0:00:00.174) 0:00:05.793 ********** 2026-03-09 00:43:19.792749 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.792762 | orchestrator | 2026-03-09 00:43:19.792775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.792789 | orchestrator | Monday 09 March 2026 00:43:18 +0000 (0:00:00.167) 0:00:05.961 ********** 2026-03-09 00:43:19.792803 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.792818 | orchestrator | 2026-03-09 00:43:19.792833 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-09 00:43:19.792847 | orchestrator | Monday 09 March 2026 00:43:19 +0000 (0:00:00.236) 0:00:06.197 ********** 2026-03-09 00:43:19.792861 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.792876 | orchestrator | 2026-03-09 00:43:19.792890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.792905 | orchestrator | Monday 09 March 2026 00:43:19 +0000 (0:00:00.281) 0:00:06.479 ********** 2026-03-09 00:43:19.792920 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.792934 | orchestrator | 2026-03-09 00:43:19.792949 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:19.792963 | orchestrator | Monday 09 March 2026 00:43:19 +0000 (0:00:00.222) 0:00:06.702 ********** 2026-03-09 00:43:19.792977 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:19.792991 | orchestrator | 2026-03-09 00:43:19.793017 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:27.889102 | orchestrator | Monday 09 March 2026 00:43:19 +0000 (0:00:00.224) 0:00:06.927 ********** 2026-03-09 00:43:27.889173 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889180 | orchestrator | 2026-03-09 00:43:27.889185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:27.889190 | orchestrator | Monday 09 March 2026 00:43:19 +0000 (0:00:00.214) 0:00:07.141 ********** 2026-03-09 00:43:27.889194 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-09 00:43:27.889199 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-09 00:43:27.889203 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-09 00:43:27.889207 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-09 00:43:27.889211 | orchestrator | 2026-03-09 
00:43:27.889215 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:27.889219 | orchestrator | Monday 09 March 2026 00:43:20 +0000 (0:00:00.886) 0:00:08.028 ********** 2026-03-09 00:43:27.889223 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889227 | orchestrator | 2026-03-09 00:43:27.889231 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:27.889235 | orchestrator | Monday 09 March 2026 00:43:21 +0000 (0:00:00.191) 0:00:08.219 ********** 2026-03-09 00:43:27.889239 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889243 | orchestrator | 2026-03-09 00:43:27.889246 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:27.889250 | orchestrator | Monday 09 March 2026 00:43:21 +0000 (0:00:00.198) 0:00:08.417 ********** 2026-03-09 00:43:27.889254 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889258 | orchestrator | 2026-03-09 00:43:27.889261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-09 00:43:27.889265 | orchestrator | Monday 09 March 2026 00:43:21 +0000 (0:00:00.186) 0:00:08.604 ********** 2026-03-09 00:43:27.889269 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889273 | orchestrator | 2026-03-09 00:43:27.889277 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-09 00:43:27.889281 | orchestrator | Monday 09 March 2026 00:43:21 +0000 (0:00:00.184) 0:00:08.788 ********** 2026-03-09 00:43:27.889284 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889304 | orchestrator | 2026-03-09 00:43:27.889308 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-09 00:43:27.889312 | orchestrator | Monday 09 March 2026 00:43:21 +0000 (0:00:00.128) 
0:00:08.917 ********** 2026-03-09 00:43:27.889316 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d9cda85-a301-5b16-a7fe-308b162b7259'}}) 2026-03-09 00:43:27.889321 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8734b320-4ffe-530d-8e73-0aec819257b4'}}) 2026-03-09 00:43:27.889324 | orchestrator | 2026-03-09 00:43:27.889329 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-09 00:43:27.889332 | orchestrator | Monday 09 March 2026 00:43:21 +0000 (0:00:00.182) 0:00:09.099 ********** 2026-03-09 00:43:27.889337 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'}) 2026-03-09 00:43:27.889341 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'}) 2026-03-09 00:43:27.889345 | orchestrator | 2026-03-09 00:43:27.889349 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-09 00:43:27.889353 | orchestrator | Monday 09 March 2026 00:43:23 +0000 (0:00:02.015) 0:00:11.114 ********** 2026-03-09 00:43:27.889357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})  2026-03-09 00:43:27.889362 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})  2026-03-09 00:43:27.889365 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889369 | orchestrator | 2026-03-09 00:43:27.889374 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-09 00:43:27.889378 | orchestrator | Monday 09 March 2026 
00:43:24 +0000 (0:00:00.164) 0:00:11.279 ********** 2026-03-09 00:43:27.889381 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'}) 2026-03-09 00:43:27.889385 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'}) 2026-03-09 00:43:27.889389 | orchestrator | 2026-03-09 00:43:27.889404 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-09 00:43:27.889411 | orchestrator | Monday 09 March 2026 00:43:25 +0000 (0:00:01.586) 0:00:12.866 ********** 2026-03-09 00:43:27.889417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})  2026-03-09 00:43:27.889424 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})  2026-03-09 00:43:27.889429 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889433 | orchestrator | 2026-03-09 00:43:27.889437 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-09 00:43:27.889441 | orchestrator | Monday 09 March 2026 00:43:25 +0000 (0:00:00.169) 0:00:13.035 ********** 2026-03-09 00:43:27.889454 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889458 | orchestrator | 2026-03-09 00:43:27.889462 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-09 00:43:27.889466 | orchestrator | Monday 09 March 2026 00:43:26 +0000 (0:00:00.162) 0:00:13.197 ********** 2026-03-09 00:43:27.889470 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 
'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})  2026-03-09 00:43:27.889473 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})  2026-03-09 00:43:27.889481 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889501 | orchestrator | 2026-03-09 00:43:27.889547 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-09 00:43:27.889552 | orchestrator | Monday 09 March 2026 00:43:26 +0000 (0:00:00.396) 0:00:13.594 ********** 2026-03-09 00:43:27.889556 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889560 | orchestrator | 2026-03-09 00:43:27.889564 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-09 00:43:27.889568 | orchestrator | Monday 09 March 2026 00:43:26 +0000 (0:00:00.143) 0:00:13.737 ********** 2026-03-09 00:43:27.889571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})  2026-03-09 00:43:27.889575 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})  2026-03-09 00:43:27.889579 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889583 | orchestrator | 2026-03-09 00:43:27.889590 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-09 00:43:27.889596 | orchestrator | Monday 09 March 2026 00:43:26 +0000 (0:00:00.147) 0:00:13.885 ********** 2026-03-09 00:43:27.889604 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889607 | orchestrator | 2026-03-09 00:43:27.889611 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-09 00:43:27.889615 | orchestrator | Monday 
09 March 2026 00:43:26 +0000 (0:00:00.165) 0:00:14.051 ********** 2026-03-09 00:43:27.889619 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})  2026-03-09 00:43:27.889627 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})  2026-03-09 00:43:27.889631 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889643 | orchestrator | 2026-03-09 00:43:27.889647 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-09 00:43:27.889651 | orchestrator | Monday 09 March 2026 00:43:27 +0000 (0:00:00.182) 0:00:14.234 ********** 2026-03-09 00:43:27.889654 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:43:27.889659 | orchestrator | 2026-03-09 00:43:27.889662 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-09 00:43:27.889667 | orchestrator | Monday 09 March 2026 00:43:27 +0000 (0:00:00.133) 0:00:14.367 ********** 2026-03-09 00:43:27.889673 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})  2026-03-09 00:43:27.889680 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})  2026-03-09 00:43:27.889687 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889692 | orchestrator | 2026-03-09 00:43:27.889698 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-09 00:43:27.889705 | orchestrator | Monday 09 March 2026 00:43:27 +0000 (0:00:00.175) 0:00:14.543 ********** 2026-03-09 00:43:27.889711 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})  2026-03-09 00:43:27.889718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})  2026-03-09 00:43:27.889722 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889727 | orchestrator | 2026-03-09 00:43:27.889731 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-09 00:43:27.889739 | orchestrator | Monday 09 March 2026 00:43:27 +0000 (0:00:00.158) 0:00:14.701 ********** 2026-03-09 00:43:27.889744 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})  2026-03-09 00:43:27.889748 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})  2026-03-09 00:43:27.889753 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889757 | orchestrator | 2026-03-09 00:43:27.889762 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-09 00:43:27.889767 | orchestrator | Monday 09 March 2026 00:43:27 +0000 (0:00:00.168) 0:00:14.870 ********** 2026-03-09 00:43:27.889771 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:27.889776 | orchestrator | 2026-03-09 00:43:27.889780 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-09 00:43:27.889788 | orchestrator | Monday 09 March 2026 00:43:27 +0000 (0:00:00.156) 0:00:15.026 ********** 2026-03-09 00:43:34.783018 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:34.783105 | orchestrator | 2026-03-09 00:43:34.783128 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-03-09 00:43:34.783148 | orchestrator | Monday 09 March 2026 00:43:28 +0000 (0:00:00.145) 0:00:15.171 ********** 2026-03-09 00:43:34.783165 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:34.783181 | orchestrator | 2026-03-09 00:43:34.783199 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-09 00:43:34.783210 | orchestrator | Monday 09 March 2026 00:43:28 +0000 (0:00:00.163) 0:00:15.335 ********** 2026-03-09 00:43:34.783220 | orchestrator | ok: [testbed-node-3] => { 2026-03-09 00:43:34.783231 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-09 00:43:34.783241 | orchestrator | } 2026-03-09 00:43:34.783252 | orchestrator | 2026-03-09 00:43:34.783262 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-09 00:43:34.783272 | orchestrator | Monday 09 March 2026 00:43:28 +0000 (0:00:00.372) 0:00:15.708 ********** 2026-03-09 00:43:34.783281 | orchestrator | ok: [testbed-node-3] => { 2026-03-09 00:43:34.783292 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-09 00:43:34.783302 | orchestrator | } 2026-03-09 00:43:34.783312 | orchestrator | 2026-03-09 00:43:34.783321 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-09 00:43:34.783331 | orchestrator | Monday 09 March 2026 00:43:28 +0000 (0:00:00.193) 0:00:15.902 ********** 2026-03-09 00:43:34.783341 | orchestrator | ok: [testbed-node-3] => { 2026-03-09 00:43:34.783351 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-09 00:43:34.783361 | orchestrator | } 2026-03-09 00:43:34.783370 | orchestrator | 2026-03-09 00:43:34.783387 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-09 00:43:34.783406 | orchestrator | Monday 09 March 2026 00:43:28 +0000 (0:00:00.166) 0:00:16.069 ********** 2026-03-09 00:43:34.783429 | orchestrator | ok: 
[testbed-node-3] 2026-03-09 00:43:34.783445 | orchestrator | 2026-03-09 00:43:34.783460 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-09 00:43:34.783475 | orchestrator | Monday 09 March 2026 00:43:29 +0000 (0:00:00.762) 0:00:16.831 ********** 2026-03-09 00:43:34.783491 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:43:34.783590 | orchestrator | 2026-03-09 00:43:34.783610 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-09 00:43:34.783628 | orchestrator | Monday 09 March 2026 00:43:30 +0000 (0:00:00.547) 0:00:17.379 ********** 2026-03-09 00:43:34.783645 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:43:34.783662 | orchestrator | 2026-03-09 00:43:34.783679 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-09 00:43:34.783697 | orchestrator | Monday 09 March 2026 00:43:30 +0000 (0:00:00.513) 0:00:17.892 ********** 2026-03-09 00:43:34.783715 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:43:34.783732 | orchestrator | 2026-03-09 00:43:34.783775 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-09 00:43:34.783788 | orchestrator | Monday 09 March 2026 00:43:30 +0000 (0:00:00.159) 0:00:18.051 ********** 2026-03-09 00:43:34.783799 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:34.783810 | orchestrator | 2026-03-09 00:43:34.783836 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-09 00:43:34.783848 | orchestrator | Monday 09 March 2026 00:43:31 +0000 (0:00:00.114) 0:00:18.166 ********** 2026-03-09 00:43:34.783859 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:43:34.783870 | orchestrator | 2026-03-09 00:43:34.783893 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-09 00:43:34.783905 | orchestrator | 
Monday 09 March 2026 00:43:31 +0000 (0:00:00.129) 0:00:18.296 **********
2026-03-09 00:43:34.783916 | orchestrator | ok: [testbed-node-3] => {
2026-03-09 00:43:34.783928 | orchestrator |  "vgs_report": {
2026-03-09 00:43:34.783939 | orchestrator |  "vg": []
2026-03-09 00:43:34.783949 | orchestrator |  }
2026-03-09 00:43:34.783958 | orchestrator | }
2026-03-09 00:43:34.783968 | orchestrator |
2026-03-09 00:43:34.783977 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-09 00:43:34.783987 | orchestrator | Monday 09 March 2026 00:43:31 +0000 (0:00:00.154) 0:00:18.451 **********
2026-03-09 00:43:34.783996 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784006 | orchestrator |
2026-03-09 00:43:34.784015 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-09 00:43:34.784025 | orchestrator | Monday 09 March 2026 00:43:31 +0000 (0:00:00.151) 0:00:18.602 **********
2026-03-09 00:43:34.784034 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784044 | orchestrator |
2026-03-09 00:43:34.784053 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-09 00:43:34.784063 | orchestrator | Monday 09 March 2026 00:43:31 +0000 (0:00:00.157) 0:00:18.760 **********
2026-03-09 00:43:34.784072 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784082 | orchestrator |
2026-03-09 00:43:34.784092 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-09 00:43:34.784101 | orchestrator | Monday 09 March 2026 00:43:31 +0000 (0:00:00.373) 0:00:19.134 **********
2026-03-09 00:43:34.784111 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784120 | orchestrator |
2026-03-09 00:43:34.784130 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-09 00:43:34.784139 | orchestrator | Monday 09 March 2026 00:43:32 +0000 (0:00:00.142) 0:00:19.276 **********
2026-03-09 00:43:34.784149 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784158 | orchestrator |
2026-03-09 00:43:34.784168 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-09 00:43:34.784177 | orchestrator | Monday 09 March 2026 00:43:32 +0000 (0:00:00.125) 0:00:19.402 **********
2026-03-09 00:43:34.784186 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784196 | orchestrator |
2026-03-09 00:43:34.784205 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-09 00:43:34.784215 | orchestrator | Monday 09 March 2026 00:43:32 +0000 (0:00:00.166) 0:00:19.568 **********
2026-03-09 00:43:34.784224 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784234 | orchestrator |
2026-03-09 00:43:34.784243 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-09 00:43:34.784253 | orchestrator | Monday 09 March 2026 00:43:32 +0000 (0:00:00.146) 0:00:19.715 **********
2026-03-09 00:43:34.784281 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784291 | orchestrator |
2026-03-09 00:43:34.784301 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-09 00:43:34.784311 | orchestrator | Monday 09 March 2026 00:43:32 +0000 (0:00:00.151) 0:00:19.867 **********
2026-03-09 00:43:34.784321 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784330 | orchestrator |
2026-03-09 00:43:34.784340 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-09 00:43:34.784357 | orchestrator | Monday 09 March 2026 00:43:32 +0000 (0:00:00.137) 0:00:20.005 **********
2026-03-09 00:43:34.784367 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784376 | orchestrator |
2026-03-09 00:43:34.784386 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-09 00:43:34.784395 | orchestrator | Monday 09 March 2026 00:43:32 +0000 (0:00:00.130) 0:00:20.136 **********
2026-03-09 00:43:34.784405 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784415 | orchestrator |
2026-03-09 00:43:34.784441 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-09 00:43:34.784452 | orchestrator | Monday 09 March 2026 00:43:33 +0000 (0:00:00.134) 0:00:20.271 **********
2026-03-09 00:43:34.784461 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784471 | orchestrator |
2026-03-09 00:43:34.784481 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-09 00:43:34.784490 | orchestrator | Monday 09 March 2026 00:43:33 +0000 (0:00:00.162) 0:00:20.433 **********
2026-03-09 00:43:34.784500 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784538 | orchestrator |
2026-03-09 00:43:34.784555 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-09 00:43:34.784571 | orchestrator | Monday 09 March 2026 00:43:33 +0000 (0:00:00.159) 0:00:20.593 **********
2026-03-09 00:43:34.784588 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784604 | orchestrator |
2026-03-09 00:43:34.784620 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-09 00:43:34.784631 | orchestrator | Monday 09 March 2026 00:43:33 +0000 (0:00:00.141) 0:00:20.734 **********
2026-03-09 00:43:34.784642 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})
2026-03-09 00:43:34.784654 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})
2026-03-09 00:43:34.784664 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784674 | orchestrator |
2026-03-09 00:43:34.784684 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-09 00:43:34.784698 | orchestrator | Monday 09 March 2026 00:43:33 +0000 (0:00:00.405) 0:00:21.140 **********
2026-03-09 00:43:34.784708 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})
2026-03-09 00:43:34.784718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})
2026-03-09 00:43:34.784728 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784737 | orchestrator |
2026-03-09 00:43:34.784747 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-09 00:43:34.784757 | orchestrator | Monday 09 March 2026 00:43:34 +0000 (0:00:00.160) 0:00:21.301 **********
2026-03-09 00:43:34.784767 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})
2026-03-09 00:43:34.784776 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})
2026-03-09 00:43:34.784786 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784796 | orchestrator |
2026-03-09 00:43:34.784805 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-09 00:43:34.784815 | orchestrator | Monday 09 March 2026 00:43:34 +0000 (0:00:00.192) 0:00:21.494 **********
2026-03-09 00:43:34.784825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})
2026-03-09 00:43:34.784834 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})
2026-03-09 00:43:34.784851 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784861 | orchestrator |
2026-03-09 00:43:34.784870 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-09 00:43:34.784880 | orchestrator | Monday 09 March 2026 00:43:34 +0000 (0:00:00.169) 0:00:21.664 **********
2026-03-09 00:43:34.784890 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})
2026-03-09 00:43:34.784900 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})
2026-03-09 00:43:34.784909 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:34.784919 | orchestrator |
2026-03-09 00:43:34.784928 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-09 00:43:34.784938 | orchestrator | Monday 09 March 2026 00:43:34 +0000 (0:00:00.183) 0:00:21.847 **********
2026-03-09 00:43:34.784956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})
2026-03-09 00:43:40.470937 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})
2026-03-09 00:43:40.471039 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:40.471055 | orchestrator |
2026-03-09 00:43:40.471068 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-09 00:43:40.471082 | orchestrator | Monday 09 March 2026 00:43:34 +0000 (0:00:00.165) 0:00:22.012 **********
2026-03-09 00:43:40.471093 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})
2026-03-09 00:43:40.471110 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})
2026-03-09 00:43:40.471131 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:40.471150 | orchestrator |
2026-03-09 00:43:40.471170 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-09 00:43:40.471191 | orchestrator | Monday 09 March 2026 00:43:35 +0000 (0:00:00.149) 0:00:22.162 **********
2026-03-09 00:43:40.471213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})
2026-03-09 00:43:40.471235 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})
2026-03-09 00:43:40.471254 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:40.471265 | orchestrator |
2026-03-09 00:43:40.471276 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-09 00:43:40.471287 | orchestrator | Monday 09 March 2026 00:43:35 +0000 (0:00:00.157) 0:00:22.319 **********
2026-03-09 00:43:40.471298 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:43:40.471310 | orchestrator |
2026-03-09 00:43:40.471321 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-09 00:43:40.471332 | orchestrator | Monday 09 March 2026 00:43:35 +0000 (0:00:00.583) 0:00:22.902 **********
2026-03-09 00:43:40.471343 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:43:40.471354 | orchestrator |
2026-03-09 00:43:40.471364 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-09 00:43:40.471392 | orchestrator | Monday 09 March 2026 00:43:36 +0000 (0:00:00.539) 0:00:23.442 **********
2026-03-09 00:43:40.471404 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:43:40.471414 | orchestrator |
2026-03-09 00:43:40.471425 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-09 00:43:40.471436 | orchestrator | Monday 09 March 2026 00:43:36 +0000 (0:00:00.189) 0:00:23.631 **********
2026-03-09 00:43:40.471473 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'vg_name': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})
2026-03-09 00:43:40.471487 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'vg_name': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})
2026-03-09 00:43:40.471501 | orchestrator |
2026-03-09 00:43:40.471543 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-09 00:43:40.471556 | orchestrator | Monday 09 March 2026 00:43:36 +0000 (0:00:00.231) 0:00:23.863 **********
2026-03-09 00:43:40.471569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})
2026-03-09 00:43:40.471582 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})
2026-03-09 00:43:40.471595 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:40.471608 | orchestrator |
2026-03-09 00:43:40.471622 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-09 00:43:40.471635 | orchestrator | Monday 09 March 2026 00:43:37 +0000 (0:00:00.377) 0:00:24.240 **********
2026-03-09 00:43:40.471647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})
2026-03-09 00:43:40.471660 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})
2026-03-09 00:43:40.471673 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:40.471685 | orchestrator |
2026-03-09 00:43:40.471697 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-09 00:43:40.471710 | orchestrator | Monday 09 March 2026 00:43:37 +0000 (0:00:00.199) 0:00:24.440 **********
2026-03-09 00:43:40.471721 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'})
2026-03-09 00:43:40.471732 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'})
2026-03-09 00:43:40.471743 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:43:40.471754 | orchestrator |
2026-03-09 00:43:40.471765 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-09 00:43:40.471776 | orchestrator | Monday 09 March 2026 00:43:37 +0000 (0:00:00.158) 0:00:24.599 **********
2026-03-09 00:43:40.471804 | orchestrator | ok: [testbed-node-3] => {
2026-03-09 00:43:40.471816 | orchestrator |  "lvm_report": {
2026-03-09 00:43:40.471828 | orchestrator |  "lv": [
2026-03-09 00:43:40.471838 | orchestrator |  {
2026-03-09 00:43:40.471850 | orchestrator |  "lv_name": "osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259",
2026-03-09 00:43:40.471861 | orchestrator |  "vg_name": "ceph-5d9cda85-a301-5b16-a7fe-308b162b7259"
2026-03-09 00:43:40.471872 | orchestrator |  },
2026-03-09 00:43:40.471883 | orchestrator |  {
2026-03-09 00:43:40.471894 | orchestrator |  "lv_name": "osd-block-8734b320-4ffe-530d-8e73-0aec819257b4",
2026-03-09 00:43:40.471905 | orchestrator |  "vg_name": "ceph-8734b320-4ffe-530d-8e73-0aec819257b4"
2026-03-09 00:43:40.471916 | orchestrator |  }
2026-03-09 00:43:40.471927 | orchestrator |  ],
2026-03-09 00:43:40.471938 | orchestrator |  "pv": [
2026-03-09 00:43:40.471949 | orchestrator |  {
2026-03-09 00:43:40.471960 | orchestrator |  "pv_name": "/dev/sdb",
2026-03-09 00:43:40.471971 | orchestrator |  "vg_name": "ceph-5d9cda85-a301-5b16-a7fe-308b162b7259"
2026-03-09 00:43:40.471982 | orchestrator |  },
2026-03-09 00:43:40.471993 | orchestrator |  {
2026-03-09 00:43:40.472012 | orchestrator |  "pv_name": "/dev/sdc",
2026-03-09 00:43:40.472023 | orchestrator |  "vg_name": "ceph-8734b320-4ffe-530d-8e73-0aec819257b4"
2026-03-09 00:43:40.472034 | orchestrator |  }
2026-03-09 00:43:40.472045 | orchestrator |  ]
2026-03-09 00:43:40.472056 | orchestrator |  }
2026-03-09 00:43:40.472067 | orchestrator | }
2026-03-09 00:43:40.472078 | orchestrator |
2026-03-09 00:43:40.472089 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-09 00:43:40.472101 | orchestrator |
2026-03-09 00:43:40.472119 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-09 00:43:40.472138 | orchestrator | Monday 09 March 2026 00:43:37 +0000 (0:00:00.310) 0:00:24.909 **********
2026-03-09 00:43:40.472156 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-09 00:43:40.472174 | orchestrator |
2026-03-09 00:43:40.472193 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-09 00:43:40.472211 | orchestrator | Monday 09 March 2026 00:43:38 +0000 (0:00:00.316) 0:00:25.225 **********
2026-03-09 00:43:40.472231 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:43:40.472250 | orchestrator |
2026-03-09 00:43:40.472268 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:40.472286 | orchestrator | Monday 09 March 2026 00:43:38 +0000 (0:00:00.266) 0:00:25.492 **********
2026-03-09 00:43:40.472298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-09 00:43:40.472309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-09 00:43:40.472320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-09 00:43:40.472330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-09 00:43:40.472341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-09 00:43:40.472352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-09 00:43:40.472363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-09 00:43:40.472374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-09 00:43:40.472384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-09 00:43:40.472395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-09 00:43:40.472405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-09 00:43:40.472416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-09 00:43:40.472427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-09 00:43:40.472437 | orchestrator |
2026-03-09 00:43:40.472448 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:40.472458 | orchestrator | Monday 09 March 2026 00:43:38 +0000 (0:00:00.474) 0:00:25.966 **********
2026-03-09 00:43:40.472469 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:40.472480 | orchestrator |
2026-03-09 00:43:40.472491 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:40.472535 | orchestrator | Monday 09 March 2026 00:43:39 +0000 (0:00:00.207) 0:00:26.174 **********
2026-03-09 00:43:40.472548 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:40.472559 | orchestrator |
2026-03-09 00:43:40.472570 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:40.472581 | orchestrator | Monday 09 March 2026 00:43:39 +0000 (0:00:00.215) 0:00:26.390 **********
2026-03-09 00:43:40.472591 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:40.472602 | orchestrator |
2026-03-09 00:43:40.472613 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:40.472632 | orchestrator | Monday 09 March 2026 00:43:39 +0000 (0:00:00.612) 0:00:27.003 **********
2026-03-09 00:43:40.472643 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:40.472653 | orchestrator |
2026-03-09 00:43:40.472664 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:40.472675 | orchestrator | Monday 09 March 2026 00:43:40 +0000 (0:00:00.207) 0:00:27.211 **********
2026-03-09 00:43:40.472685 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:40.472696 | orchestrator |
2026-03-09 00:43:40.472707 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:40.472718 | orchestrator | Monday 09 March 2026 00:43:40 +0000 (0:00:00.201) 0:00:27.412 **********
2026-03-09 00:43:40.472729 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:40.472740 | orchestrator |
2026-03-09 00:43:40.472759 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:52.070375 | orchestrator | Monday 09 March 2026 00:43:40 +0000 (0:00:00.196) 0:00:27.609 **********
2026-03-09 00:43:52.070451 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.070460 | orchestrator |
2026-03-09 00:43:52.070467 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:52.070474 | orchestrator | Monday 09 March 2026 00:43:40 +0000 (0:00:00.196) 0:00:27.805 **********
2026-03-09 00:43:52.070483 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.070493 | orchestrator |
2026-03-09 00:43:52.070501 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:52.070557 | orchestrator | Monday 09 March 2026 00:43:40 +0000 (0:00:00.229) 0:00:28.035 **********
2026-03-09 00:43:52.070566 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2)
2026-03-09 00:43:52.070576 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2)
2026-03-09 00:43:52.070584 | orchestrator |
2026-03-09 00:43:52.070593 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:52.070601 | orchestrator | Monday 09 March 2026 00:43:41 +0000 (0:00:00.411) 0:00:28.446 **********
2026-03-09 00:43:52.070610 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_11658218-3952-45bc-99ae-d48f4d257268)
2026-03-09 00:43:52.070618 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_11658218-3952-45bc-99ae-d48f4d257268)
2026-03-09 00:43:52.070627 | orchestrator |
2026-03-09 00:43:52.070635 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:52.070644 | orchestrator | Monday 09 March 2026 00:43:41 +0000 (0:00:00.459) 0:00:28.905 **********
2026-03-09 00:43:52.070653 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d43c938e-9c3c-4e95-bc09-26edff92b810)
2026-03-09 00:43:52.070661 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d43c938e-9c3c-4e95-bc09-26edff92b810)
2026-03-09 00:43:52.070670 | orchestrator |
2026-03-09 00:43:52.070678 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:52.070687 | orchestrator | Monday 09 March 2026 00:43:42 +0000 (0:00:00.439) 0:00:29.344 **********
2026-03-09 00:43:52.070712 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3a13d83a-3534-4183-8691-9f150495a6dc)
2026-03-09 00:43:52.070719 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3a13d83a-3534-4183-8691-9f150495a6dc)
2026-03-09 00:43:52.070724 | orchestrator |
2026-03-09 00:43:52.070730 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:43:52.070735 | orchestrator | Monday 09 March 2026 00:43:42 +0000 (0:00:00.619) 0:00:29.964 **********
2026-03-09 00:43:52.070740 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-09 00:43:52.070746 | orchestrator |
2026-03-09 00:43:52.070751 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.070756 | orchestrator | Monday 09 March 2026 00:43:43 +0000 (0:00:00.567) 0:00:30.532 **********
2026-03-09 00:43:52.070779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-09 00:43:52.070785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-09 00:43:52.070790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-09 00:43:52.070796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-09 00:43:52.070801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-09 00:43:52.070806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-09 00:43:52.070811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-09 00:43:52.070816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-09 00:43:52.070821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-09 00:43:52.070826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-09 00:43:52.070831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-09 00:43:52.070836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-09 00:43:52.070841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-09 00:43:52.070847 | orchestrator |
2026-03-09 00:43:52.070852 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.070857 | orchestrator | Monday 09 March 2026 00:43:44 +0000 (0:00:00.938) 0:00:31.470 **********
2026-03-09 00:43:52.070862 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.070868 | orchestrator |
2026-03-09 00:43:52.070873 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.070878 | orchestrator | Monday 09 March 2026 00:43:44 +0000 (0:00:00.201) 0:00:31.672 **********
2026-03-09 00:43:52.070883 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.070888 | orchestrator |
2026-03-09 00:43:52.070893 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.070899 | orchestrator | Monday 09 March 2026 00:43:44 +0000 (0:00:00.221) 0:00:31.894 **********
2026-03-09 00:43:52.070904 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.070909 | orchestrator |
2026-03-09 00:43:52.070928 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.070934 | orchestrator | Monday 09 March 2026 00:43:44 +0000 (0:00:00.209) 0:00:32.104 **********
2026-03-09 00:43:52.070940 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.070946 | orchestrator |
2026-03-09 00:43:52.070953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.070959 | orchestrator | Monday 09 March 2026 00:43:45 +0000 (0:00:00.201) 0:00:32.305 **********
2026-03-09 00:43:52.070965 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.070971 | orchestrator |
2026-03-09 00:43:52.070977 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.070983 | orchestrator | Monday 09 March 2026 00:43:45 +0000 (0:00:00.204) 0:00:32.510 **********
2026-03-09 00:43:52.070989 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.070995 | orchestrator |
2026-03-09 00:43:52.071001 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.071008 | orchestrator | Monday 09 March 2026 00:43:45 +0000 (0:00:00.206) 0:00:32.717 **********
2026-03-09 00:43:52.071015 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.071023 | orchestrator |
2026-03-09 00:43:52.071032 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.071040 | orchestrator | Monday 09 March 2026 00:43:45 +0000 (0:00:00.228) 0:00:32.945 **********
2026-03-09 00:43:52.071054 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.071064 | orchestrator |
2026-03-09 00:43:52.071073 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.071082 | orchestrator | Monday 09 March 2026 00:43:46 +0000 (0:00:00.200) 0:00:33.145 **********
2026-03-09 00:43:52.071090 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-09 00:43:52.071099 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-09 00:43:52.071109 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-09 00:43:52.071118 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-09 00:43:52.071124 | orchestrator |
2026-03-09 00:43:52.071131 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.071137 | orchestrator | Monday 09 March 2026 00:43:46 +0000 (0:00:00.886) 0:00:34.032 **********
2026-03-09 00:43:52.071143 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.071149 | orchestrator |
2026-03-09 00:43:52.071155 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.071161 | orchestrator | Monday 09 March 2026 00:43:47 +0000 (0:00:00.231) 0:00:34.263 **********
2026-03-09 00:43:52.071171 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.071177 | orchestrator |
2026-03-09 00:43:52.071184 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.071190 | orchestrator | Monday 09 March 2026 00:43:47 +0000 (0:00:00.706) 0:00:34.970 **********
2026-03-09 00:43:52.071196 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.071202 | orchestrator |
2026-03-09 00:43:52.071208 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:43:52.071214 | orchestrator | Monday 09 March 2026 00:43:48 +0000 (0:00:00.207) 0:00:35.177 **********
2026-03-09 00:43:52.071220 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.071226 | orchestrator |
2026-03-09 00:43:52.071232 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-09 00:43:52.071238 | orchestrator | Monday 09 March 2026 00:43:48 +0000 (0:00:00.273) 0:00:35.450 **********
2026-03-09 00:43:52.071244 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.071250 | orchestrator |
2026-03-09 00:43:52.071256 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-09 00:43:52.071262 | orchestrator | Monday 09 March 2026 00:43:48 +0000 (0:00:00.131) 0:00:35.582 **********
2026-03-09 00:43:52.071268 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'deb603ca-2db3-5399-8e8d-1e0d01641e0c'}})
2026-03-09 00:43:52.071274 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c1f67558-6290-50a7-9c09-ea5e74fb08ab'}})
2026-03-09 00:43:52.071281 | orchestrator |
2026-03-09 00:43:52.071286 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-09 00:43:52.071293 | orchestrator | Monday 09 March 2026 00:43:48 +0000 (0:00:00.186) 0:00:35.768 **********
2026-03-09 00:43:52.071300 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})
2026-03-09 00:43:52.071308 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})
2026-03-09 00:43:52.071313 | orchestrator |
2026-03-09 00:43:52.071318 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-09 00:43:52.071324 | orchestrator | Monday 09 March 2026 00:43:50 +0000 (0:00:01.946) 0:00:37.715 **********
2026-03-09 00:43:52.071329 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})
2026-03-09 00:43:52.071335 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})
2026-03-09 00:43:52.071345 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:52.071350 | orchestrator |
2026-03-09 00:43:52.071355 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-09 00:43:52.071360 | orchestrator | Monday 09 March 2026 00:43:50 +0000 (0:00:00.150) 0:00:37.866 **********
2026-03-09 00:43:52.071365 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})
2026-03-09 00:43:52.071375 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})
2026-03-09 00:43:57.658625 | orchestrator |
2026-03-09 00:43:57.658712 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-09 00:43:57.658729 | orchestrator | Monday 09 March 2026 00:43:52 +0000 (0:00:01.425) 0:00:39.292 **********
2026-03-09 00:43:57.658740 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})
2026-03-09 00:43:57.658752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})
2026-03-09 00:43:57.658763 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:57.658773 | orchestrator |
2026-03-09 00:43:57.658784 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-09 00:43:57.658795 | orchestrator | Monday 09 March 2026 00:43:52 +0000 (0:00:00.167) 0:00:39.459 **********
2026-03-09 00:43:57.658805 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:57.658815 | orchestrator |
2026-03-09 00:43:57.658825 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-09 00:43:57.658836 | orchestrator | Monday 09 March 2026 00:43:52 +0000 (0:00:00.127) 0:00:39.586 **********
2026-03-09 00:43:57.658846 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})
2026-03-09 00:43:57.658857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})
2026-03-09 00:43:57.658866 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:57.658873 | orchestrator |
2026-03-09 00:43:57.658879 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-09 00:43:57.658886 | orchestrator | Monday 09 March 2026 00:43:52 +0000 (0:00:00.157) 0:00:39.744 **********
2026-03-09 00:43:57.658892 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:43:57.658898 | orchestrator |
2026-03-09 00:43:57.658904 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-09 00:43:57.658911 | orchestrator | Monday
09 March 2026 00:43:52 +0000 (0:00:00.145) 0:00:39.889 ********** 2026-03-09 00:43:57.658917 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:43:57.658923 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:43:57.658930 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.658936 | orchestrator | 2026-03-09 00:43:57.658942 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-09 00:43:57.658948 | orchestrator | Monday 09 March 2026 00:43:53 +0000 (0:00:00.364) 0:00:40.254 ********** 2026-03-09 00:43:57.658954 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.658960 | orchestrator | 2026-03-09 00:43:57.658967 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-09 00:43:57.658973 | orchestrator | Monday 09 March 2026 00:43:53 +0000 (0:00:00.143) 0:00:40.397 ********** 2026-03-09 00:43:57.658979 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:43:57.659004 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:43:57.659010 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.659017 | orchestrator | 2026-03-09 00:43:57.659023 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-09 00:43:57.659044 | orchestrator | Monday 09 March 2026 00:43:53 +0000 (0:00:00.173) 0:00:40.571 ********** 2026-03-09 00:43:57.659051 | orchestrator | ok: [testbed-node-4] 
2026-03-09 00:43:57.659058 | orchestrator | 2026-03-09 00:43:57.659064 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-09 00:43:57.659070 | orchestrator | Monday 09 March 2026 00:43:53 +0000 (0:00:00.143) 0:00:40.715 ********** 2026-03-09 00:43:57.659077 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:43:57.659083 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:43:57.659089 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.659095 | orchestrator | 2026-03-09 00:43:57.659102 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-09 00:43:57.659108 | orchestrator | Monday 09 March 2026 00:43:53 +0000 (0:00:00.148) 0:00:40.863 ********** 2026-03-09 00:43:57.659114 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:43:57.659120 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:43:57.659127 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.659133 | orchestrator | 2026-03-09 00:43:57.659139 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-09 00:43:57.659158 | orchestrator | Monday 09 March 2026 00:43:53 +0000 (0:00:00.163) 0:00:41.026 ********** 2026-03-09 00:43:57.659167 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 
00:43:57.659174 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:43:57.659181 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.659188 | orchestrator | 2026-03-09 00:43:57.659195 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-09 00:43:57.659202 | orchestrator | Monday 09 March 2026 00:43:54 +0000 (0:00:00.159) 0:00:41.186 ********** 2026-03-09 00:43:57.659210 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.659217 | orchestrator | 2026-03-09 00:43:57.659224 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-09 00:43:57.659231 | orchestrator | Monday 09 March 2026 00:43:54 +0000 (0:00:00.136) 0:00:41.323 ********** 2026-03-09 00:43:57.659238 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.659245 | orchestrator | 2026-03-09 00:43:57.659252 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-09 00:43:57.659259 | orchestrator | Monday 09 March 2026 00:43:54 +0000 (0:00:00.147) 0:00:41.470 ********** 2026-03-09 00:43:57.659267 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.659274 | orchestrator | 2026-03-09 00:43:57.659281 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-09 00:43:57.659288 | orchestrator | Monday 09 March 2026 00:43:54 +0000 (0:00:00.136) 0:00:41.607 ********** 2026-03-09 00:43:57.659295 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:43:57.659303 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-09 00:43:57.659316 | orchestrator | } 2026-03-09 00:43:57.659324 | orchestrator | 2026-03-09 00:43:57.659331 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-09 
00:43:57.659338 | orchestrator | Monday 09 March 2026 00:43:54 +0000 (0:00:00.140) 0:00:41.747 ********** 2026-03-09 00:43:57.659350 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:43:57.659363 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-09 00:43:57.659376 | orchestrator | } 2026-03-09 00:43:57.659388 | orchestrator | 2026-03-09 00:43:57.659406 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-09 00:43:57.659420 | orchestrator | Monday 09 March 2026 00:43:54 +0000 (0:00:00.134) 0:00:41.882 ********** 2026-03-09 00:43:57.659432 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:43:57.659443 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-09 00:43:57.659455 | orchestrator | } 2026-03-09 00:43:57.659466 | orchestrator | 2026-03-09 00:43:57.659477 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-09 00:43:57.659488 | orchestrator | Monday 09 March 2026 00:43:55 +0000 (0:00:00.334) 0:00:42.217 ********** 2026-03-09 00:43:57.659499 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:43:57.659535 | orchestrator | 2026-03-09 00:43:57.659547 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-09 00:43:57.659558 | orchestrator | Monday 09 March 2026 00:43:55 +0000 (0:00:00.528) 0:00:42.745 ********** 2026-03-09 00:43:57.659569 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:43:57.659579 | orchestrator | 2026-03-09 00:43:57.659590 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-09 00:43:57.659601 | orchestrator | Monday 09 March 2026 00:43:56 +0000 (0:00:00.540) 0:00:43.286 ********** 2026-03-09 00:43:57.659612 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:43:57.659623 | orchestrator | 2026-03-09 00:43:57.659634 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-09 00:43:57.659645 | orchestrator | Monday 09 March 2026 00:43:56 +0000 (0:00:00.518) 0:00:43.804 ********** 2026-03-09 00:43:57.659656 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:43:57.659667 | orchestrator | 2026-03-09 00:43:57.659677 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-09 00:43:57.659688 | orchestrator | Monday 09 March 2026 00:43:56 +0000 (0:00:00.138) 0:00:43.942 ********** 2026-03-09 00:43:57.659699 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.659710 | orchestrator | 2026-03-09 00:43:57.659721 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-09 00:43:57.659732 | orchestrator | Monday 09 March 2026 00:43:56 +0000 (0:00:00.110) 0:00:44.053 ********** 2026-03-09 00:43:57.659743 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.659753 | orchestrator | 2026-03-09 00:43:57.659764 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-09 00:43:57.659775 | orchestrator | Monday 09 March 2026 00:43:57 +0000 (0:00:00.104) 0:00:44.158 ********** 2026-03-09 00:43:57.659786 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:43:57.659796 | orchestrator |  "vgs_report": { 2026-03-09 00:43:57.659808 | orchestrator |  "vg": [] 2026-03-09 00:43:57.659819 | orchestrator |  } 2026-03-09 00:43:57.659830 | orchestrator | } 2026-03-09 00:43:57.659841 | orchestrator | 2026-03-09 00:43:57.659852 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-09 00:43:57.659863 | orchestrator | Monday 09 March 2026 00:43:57 +0000 (0:00:00.124) 0:00:44.283 ********** 2026-03-09 00:43:57.659874 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.659884 | orchestrator | 2026-03-09 00:43:57.659895 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-09 00:43:57.659906 | orchestrator | Monday 09 March 2026 00:43:57 +0000 (0:00:00.126) 0:00:44.409 ********** 2026-03-09 00:43:57.659917 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.659927 | orchestrator | 2026-03-09 00:43:57.659938 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-09 00:43:57.659956 | orchestrator | Monday 09 March 2026 00:43:57 +0000 (0:00:00.131) 0:00:44.541 ********** 2026-03-09 00:43:57.659967 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.659978 | orchestrator | 2026-03-09 00:43:57.659989 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-09 00:43:57.660000 | orchestrator | Monday 09 March 2026 00:43:57 +0000 (0:00:00.128) 0:00:44.669 ********** 2026-03-09 00:43:57.660011 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:43:57.660022 | orchestrator | 2026-03-09 00:43:57.660039 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-09 00:44:02.509538 | orchestrator | Monday 09 March 2026 00:43:57 +0000 (0:00:00.127) 0:00:44.797 ********** 2026-03-09 00:44:02.509640 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.509655 | orchestrator | 2026-03-09 00:44:02.509668 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-09 00:44:02.509679 | orchestrator | Monday 09 March 2026 00:43:57 +0000 (0:00:00.267) 0:00:45.065 ********** 2026-03-09 00:44:02.509691 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.509702 | orchestrator | 2026-03-09 00:44:02.509714 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-09 00:44:02.509725 | orchestrator | Monday 09 March 2026 00:43:58 +0000 (0:00:00.122) 0:00:45.188 ********** 2026-03-09 00:44:02.509736 | orchestrator | skipping: [testbed-node-4] 
2026-03-09 00:44:02.509748 | orchestrator | 2026-03-09 00:44:02.509759 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-09 00:44:02.509770 | orchestrator | Monday 09 March 2026 00:43:58 +0000 (0:00:00.129) 0:00:45.317 ********** 2026-03-09 00:44:02.509781 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.509792 | orchestrator | 2026-03-09 00:44:02.509803 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-09 00:44:02.509815 | orchestrator | Monday 09 March 2026 00:43:58 +0000 (0:00:00.146) 0:00:45.464 ********** 2026-03-09 00:44:02.509826 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.509837 | orchestrator | 2026-03-09 00:44:02.509848 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-09 00:44:02.509860 | orchestrator | Monday 09 March 2026 00:43:58 +0000 (0:00:00.168) 0:00:45.632 ********** 2026-03-09 00:44:02.509871 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.509882 | orchestrator | 2026-03-09 00:44:02.509894 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-09 00:44:02.509905 | orchestrator | Monday 09 March 2026 00:43:58 +0000 (0:00:00.162) 0:00:45.795 ********** 2026-03-09 00:44:02.509916 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.509927 | orchestrator | 2026-03-09 00:44:02.509939 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-09 00:44:02.509950 | orchestrator | Monday 09 March 2026 00:43:58 +0000 (0:00:00.174) 0:00:45.970 ********** 2026-03-09 00:44:02.509979 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.509991 | orchestrator | 2026-03-09 00:44:02.510002 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-09 00:44:02.510014 | orchestrator | 
Monday 09 March 2026 00:43:58 +0000 (0:00:00.152) 0:00:46.122 ********** 2026-03-09 00:44:02.510082 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.510097 | orchestrator | 2026-03-09 00:44:02.510111 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-09 00:44:02.510125 | orchestrator | Monday 09 March 2026 00:43:59 +0000 (0:00:00.135) 0:00:46.258 ********** 2026-03-09 00:44:02.510139 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.510151 | orchestrator | 2026-03-09 00:44:02.510164 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-09 00:44:02.510178 | orchestrator | Monday 09 March 2026 00:43:59 +0000 (0:00:00.139) 0:00:46.397 ********** 2026-03-09 00:44:02.510193 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:44:02.510230 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:44:02.510242 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.510253 | orchestrator | 2026-03-09 00:44:02.510264 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-09 00:44:02.510276 | orchestrator | Monday 09 March 2026 00:43:59 +0000 (0:00:00.163) 0:00:46.561 ********** 2026-03-09 00:44:02.510287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:44:02.510298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:44:02.510310 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 00:44:02.510321 | orchestrator | 2026-03-09 00:44:02.510332 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-09 00:44:02.510343 | orchestrator | Monday 09 March 2026 00:43:59 +0000 (0:00:00.166) 0:00:46.727 ********** 2026-03-09 00:44:02.510354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:44:02.510365 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:44:02.510376 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.510387 | orchestrator | 2026-03-09 00:44:02.510398 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-09 00:44:02.510409 | orchestrator | Monday 09 March 2026 00:43:59 +0000 (0:00:00.393) 0:00:47.121 ********** 2026-03-09 00:44:02.510420 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:44:02.510431 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:44:02.510442 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.510453 | orchestrator | 2026-03-09 00:44:02.510480 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-09 00:44:02.510492 | orchestrator | Monday 09 March 2026 00:44:00 +0000 (0:00:00.154) 0:00:47.275 ********** 2026-03-09 00:44:02.510524 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 
'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:44:02.510536 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:44:02.510548 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.510559 | orchestrator | 2026-03-09 00:44:02.510570 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-09 00:44:02.510581 | orchestrator | Monday 09 March 2026 00:44:00 +0000 (0:00:00.165) 0:00:47.441 ********** 2026-03-09 00:44:02.510592 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:44:02.510604 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:44:02.510615 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.510626 | orchestrator | 2026-03-09 00:44:02.510637 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-09 00:44:02.510648 | orchestrator | Monday 09 March 2026 00:44:00 +0000 (0:00:00.148) 0:00:47.589 ********** 2026-03-09 00:44:02.510659 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:44:02.510679 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:44:02.510690 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.510701 | orchestrator | 2026-03-09 00:44:02.510712 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-09 
00:44:02.510723 | orchestrator | Monday 09 March 2026 00:44:00 +0000 (0:00:00.167) 0:00:47.757 ********** 2026-03-09 00:44:02.510734 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:44:02.510745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:44:02.510756 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.510767 | orchestrator | 2026-03-09 00:44:02.510778 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-09 00:44:02.510789 | orchestrator | Monday 09 March 2026 00:44:00 +0000 (0:00:00.154) 0:00:47.911 ********** 2026-03-09 00:44:02.510800 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:44:02.510811 | orchestrator | 2026-03-09 00:44:02.510822 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-09 00:44:02.510833 | orchestrator | Monday 09 March 2026 00:44:01 +0000 (0:00:00.627) 0:00:48.539 ********** 2026-03-09 00:44:02.510844 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:44:02.510855 | orchestrator | 2026-03-09 00:44:02.510866 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-09 00:44:02.510876 | orchestrator | Monday 09 March 2026 00:44:01 +0000 (0:00:00.554) 0:00:49.094 ********** 2026-03-09 00:44:02.510887 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:44:02.510898 | orchestrator | 2026-03-09 00:44:02.510909 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-09 00:44:02.510920 | orchestrator | Monday 09 March 2026 00:44:02 +0000 (0:00:00.146) 0:00:49.240 ********** 2026-03-09 00:44:02.510931 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'vg_name': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'}) 2026-03-09 00:44:02.510944 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'vg_name': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'}) 2026-03-09 00:44:02.510955 | orchestrator | 2026-03-09 00:44:02.510966 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-09 00:44:02.510976 | orchestrator | Monday 09 March 2026 00:44:02 +0000 (0:00:00.165) 0:00:49.405 ********** 2026-03-09 00:44:02.510987 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:44:02.510999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:44:02.511010 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:02.511020 | orchestrator | 2026-03-09 00:44:02.511031 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-09 00:44:02.511042 | orchestrator | Monday 09 March 2026 00:44:02 +0000 (0:00:00.175) 0:00:49.581 ********** 2026-03-09 00:44:02.511053 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:44:02.511071 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:44:08.610175 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:08.610304 | orchestrator | 2026-03-09 00:44:08.610323 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-09 00:44:08.610337 | 
orchestrator | Monday 09 March 2026 00:44:02 +0000 (0:00:00.161) 0:00:49.743 ********** 2026-03-09 00:44:08.610349 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'})  2026-03-09 00:44:08.610362 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'})  2026-03-09 00:44:08.610373 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:08.610384 | orchestrator | 2026-03-09 00:44:08.610396 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-09 00:44:08.610407 | orchestrator | Monday 09 March 2026 00:44:02 +0000 (0:00:00.157) 0:00:49.901 ********** 2026-03-09 00:44:08.610419 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 00:44:08.610439 | orchestrator |  "lvm_report": { 2026-03-09 00:44:08.610459 | orchestrator |  "lv": [ 2026-03-09 00:44:08.610476 | orchestrator |  { 2026-03-09 00:44:08.610494 | orchestrator |  "lv_name": "osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab", 2026-03-09 00:44:08.610544 | orchestrator |  "vg_name": "ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab" 2026-03-09 00:44:08.610561 | orchestrator |  }, 2026-03-09 00:44:08.610579 | orchestrator |  { 2026-03-09 00:44:08.610598 | orchestrator |  "lv_name": "osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c", 2026-03-09 00:44:08.610618 | orchestrator |  "vg_name": "ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c" 2026-03-09 00:44:08.610638 | orchestrator |  } 2026-03-09 00:44:08.610656 | orchestrator |  ], 2026-03-09 00:44:08.610674 | orchestrator |  "pv": [ 2026-03-09 00:44:08.610686 | orchestrator |  { 2026-03-09 00:44:08.610700 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-09 00:44:08.610722 | orchestrator |  "vg_name": "ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c" 2026-03-09 00:44:08.610733 | orchestrator |  }, 2026-03-09 
00:44:08.610744 | orchestrator |  { 2026-03-09 00:44:08.610755 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-09 00:44:08.610767 | orchestrator |  "vg_name": "ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab" 2026-03-09 00:44:08.610777 | orchestrator |  } 2026-03-09 00:44:08.610789 | orchestrator |  ] 2026-03-09 00:44:08.610804 | orchestrator |  } 2026-03-09 00:44:08.610822 | orchestrator | } 2026-03-09 00:44:08.610840 | orchestrator | 2026-03-09 00:44:08.610857 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-09 00:44:08.610874 | orchestrator | 2026-03-09 00:44:08.610892 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-09 00:44:08.610910 | orchestrator | Monday 09 March 2026 00:44:03 +0000 (0:00:00.498) 0:00:50.399 ********** 2026-03-09 00:44:08.610928 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-09 00:44:08.610947 | orchestrator | 2026-03-09 00:44:08.610965 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-09 00:44:08.610984 | orchestrator | Monday 09 March 2026 00:44:03 +0000 (0:00:00.253) 0:00:50.653 ********** 2026-03-09 00:44:08.610999 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:44:08.611010 | orchestrator | 2026-03-09 00:44:08.611021 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-09 00:44:08.611032 | orchestrator | Monday 09 March 2026 00:44:03 +0000 (0:00:00.239) 0:00:50.892 ********** 2026-03-09 00:44:08.611043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-09 00:44:08.611054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-09 00:44:08.611065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-09 00:44:08.611076 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-09 00:44:08.611099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-09 00:44:08.611110 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-09 00:44:08.611121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-09 00:44:08.611132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-09 00:44:08.611143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-09 00:44:08.611158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-09 00:44:08.611169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-09 00:44:08.611180 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-09 00:44:08.611191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-09 00:44:08.611202 | orchestrator |
2026-03-09 00:44:08.611213 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611243 | orchestrator | Monday 09 March 2026 00:44:04 +0000 (0:00:00.388) 0:00:51.281 **********
2026-03-09 00:44:08.611265 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:08.611277 | orchestrator |
2026-03-09 00:44:08.611288 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611299 | orchestrator | Monday 09 March 2026 00:44:04 +0000 (0:00:00.220) 0:00:51.501 **********
2026-03-09 00:44:08.611310 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:08.611321 | orchestrator |
2026-03-09 00:44:08.611332 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611366 | orchestrator | Monday 09 March 2026 00:44:04 +0000 (0:00:00.188) 0:00:51.690 **********
2026-03-09 00:44:08.611378 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:08.611389 | orchestrator |
2026-03-09 00:44:08.611400 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611411 | orchestrator | Monday 09 March 2026 00:44:04 +0000 (0:00:00.206) 0:00:51.897 **********
2026-03-09 00:44:08.611422 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:08.611433 | orchestrator |
2026-03-09 00:44:08.611444 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611455 | orchestrator | Monday 09 March 2026 00:44:04 +0000 (0:00:00.209) 0:00:52.106 **********
2026-03-09 00:44:08.611466 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:08.611476 | orchestrator |
2026-03-09 00:44:08.611487 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611498 | orchestrator | Monday 09 March 2026 00:44:05 +0000 (0:00:00.623) 0:00:52.729 **********
2026-03-09 00:44:08.611608 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:08.611630 | orchestrator |
2026-03-09 00:44:08.611651 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611670 | orchestrator | Monday 09 March 2026 00:44:05 +0000 (0:00:00.196) 0:00:52.926 **********
2026-03-09 00:44:08.611684 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:08.611695 | orchestrator |
2026-03-09 00:44:08.611706 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611717 | orchestrator | Monday 09 March 2026 00:44:05 +0000 (0:00:00.207) 0:00:53.134 **********
2026-03-09 00:44:08.611728 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:08.611739 | orchestrator |
2026-03-09 00:44:08.611750 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611761 | orchestrator | Monday 09 March 2026 00:44:06 +0000 (0:00:00.193) 0:00:53.327 **********
2026-03-09 00:44:08.611772 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a)
2026-03-09 00:44:08.611792 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a)
2026-03-09 00:44:08.611811 | orchestrator |
2026-03-09 00:44:08.611822 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611833 | orchestrator | Monday 09 March 2026 00:44:06 +0000 (0:00:00.451) 0:00:53.779 **********
2026-03-09 00:44:08.611844 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_34bdd215-cdf5-4909-8dd4-972bf1b79030)
2026-03-09 00:44:08.611855 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_34bdd215-cdf5-4909-8dd4-972bf1b79030)
2026-03-09 00:44:08.611866 | orchestrator |
2026-03-09 00:44:08.611877 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611888 | orchestrator | Monday 09 March 2026 00:44:07 +0000 (0:00:00.416) 0:00:54.195 **********
2026-03-09 00:44:08.611899 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_709b939c-9ac4-47b1-b5c3-cb1d8710b2fd)
2026-03-09 00:44:08.611910 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_709b939c-9ac4-47b1-b5c3-cb1d8710b2fd)
2026-03-09 00:44:08.611921 | orchestrator |
2026-03-09 00:44:08.611932 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611942 | orchestrator | Monday 09 March 2026 00:44:07 +0000 (0:00:00.423) 0:00:54.619 **********
2026-03-09 00:44:08.611953 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_069ee836-7f84-4f9f-9b43-0fd45db025c2)
2026-03-09 00:44:08.611964 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_069ee836-7f84-4f9f-9b43-0fd45db025c2)
2026-03-09 00:44:08.611975 | orchestrator |
2026-03-09 00:44:08.611986 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-09 00:44:08.611997 | orchestrator | Monday 09 March 2026 00:44:07 +0000 (0:00:00.447) 0:00:55.067 **********
2026-03-09 00:44:08.612008 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-09 00:44:08.612019 | orchestrator |
2026-03-09 00:44:08.612030 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:08.612041 | orchestrator | Monday 09 March 2026 00:44:08 +0000 (0:00:00.345) 0:00:55.412 **********
2026-03-09 00:44:08.612052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-09 00:44:08.612063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-09 00:44:08.612074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-09 00:44:08.612085 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-09 00:44:08.612095 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-09 00:44:08.612106 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-09 00:44:08.612117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-09 00:44:08.612128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-09 00:44:08.612139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-09 00:44:08.612150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-09 00:44:08.612161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-09 00:44:08.612181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-09 00:44:17.517646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-09 00:44:17.517750 | orchestrator |
2026-03-09 00:44:17.517768 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.517781 | orchestrator | Monday 09 March 2026 00:44:08 +0000 (0:00:00.416) 0:00:55.829 **********
2026-03-09 00:44:17.517821 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.517835 | orchestrator |
2026-03-09 00:44:17.517847 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.517858 | orchestrator | Monday 09 March 2026 00:44:08 +0000 (0:00:00.209) 0:00:56.039 **********
2026-03-09 00:44:17.517870 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.517882 | orchestrator |
2026-03-09 00:44:17.517893 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.517905 | orchestrator | Monday 09 March 2026 00:44:09 +0000 (0:00:00.707) 0:00:56.746 **********
2026-03-09 00:44:17.517916 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.517928 | orchestrator |
2026-03-09 00:44:17.517938 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.517949 | orchestrator | Monday 09 March 2026 00:44:09 +0000 (0:00:00.212) 0:00:56.959 **********
2026-03-09 00:44:17.518102 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.518120 | orchestrator |
2026-03-09 00:44:17.518133 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.518147 | orchestrator | Monday 09 March 2026 00:44:10 +0000 (0:00:00.216) 0:00:57.176 **********
2026-03-09 00:44:17.518159 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.518172 | orchestrator |
2026-03-09 00:44:17.518185 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.518200 | orchestrator | Monday 09 March 2026 00:44:10 +0000 (0:00:00.191) 0:00:57.368 **********
2026-03-09 00:44:17.518213 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.518226 | orchestrator |
2026-03-09 00:44:17.518255 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.518268 | orchestrator | Monday 09 March 2026 00:44:10 +0000 (0:00:00.201) 0:00:57.569 **********
2026-03-09 00:44:17.518281 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.518294 | orchestrator |
2026-03-09 00:44:17.518307 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.518318 | orchestrator | Monday 09 March 2026 00:44:10 +0000 (0:00:00.214) 0:00:57.783 **********
2026-03-09 00:44:17.518331 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.518344 | orchestrator |
2026-03-09 00:44:17.518357 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.518370 | orchestrator | Monday 09 March 2026 00:44:10 +0000 (0:00:00.192) 0:00:57.976 **********
2026-03-09 00:44:17.518383 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-09 00:44:17.518397 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-09 00:44:17.518411 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-09 00:44:17.518432 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-09 00:44:17.518455 | orchestrator |
2026-03-09 00:44:17.518468 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.518482 | orchestrator | Monday 09 March 2026 00:44:11 +0000 (0:00:00.650) 0:00:58.626 **********
2026-03-09 00:44:17.518495 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.518532 | orchestrator |
2026-03-09 00:44:17.518544 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.518555 | orchestrator | Monday 09 March 2026 00:44:11 +0000 (0:00:00.228) 0:00:58.855 **********
2026-03-09 00:44:17.518566 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.518577 | orchestrator |
2026-03-09 00:44:17.518588 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.518600 | orchestrator | Monday 09 March 2026 00:44:11 +0000 (0:00:00.198) 0:00:59.053 **********
2026-03-09 00:44:17.518611 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.518622 | orchestrator |
2026-03-09 00:44:17.518632 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-09 00:44:17.518645 | orchestrator | Monday 09 March 2026 00:44:12 +0000 (0:00:00.192) 0:00:59.246 **********
2026-03-09 00:44:17.518662 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.518669 | orchestrator |
2026-03-09 00:44:17.518676 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-09 00:44:17.518683 | orchestrator | Monday 09 March 2026 00:44:12 +0000 (0:00:00.201) 0:00:59.447 **********
2026-03-09 00:44:17.518690 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.518696 | orchestrator |
2026-03-09 00:44:17.518708 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-09 00:44:17.518719 | orchestrator | Monday 09 March 2026 00:44:12 +0000 (0:00:00.325) 0:00:59.773 **********
2026-03-09 00:44:17.518730 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5d8e344b-ecd1-5c90-b783-cb125ac7004a'}})
2026-03-09 00:44:17.518742 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd6be2487-d224-518f-9009-30806e6fa587'}})
2026-03-09 00:44:17.518754 | orchestrator |
2026-03-09 00:44:17.518765 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-09 00:44:17.518776 | orchestrator | Monday 09 March 2026 00:44:12 +0000 (0:00:00.205) 0:00:59.978 **********
2026-03-09 00:44:17.518789 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:17.518802 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:17.518814 | orchestrator |
2026-03-09 00:44:17.518825 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-09 00:44:17.518855 | orchestrator | Monday 09 March 2026 00:44:14 +0000 (0:00:01.773) 0:01:01.751 **********
2026-03-09 00:44:17.518867 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:17.518880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:17.518891 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.518902 | orchestrator |
2026-03-09 00:44:17.518913 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-09 00:44:17.518924 | orchestrator | Monday 09 March 2026 00:44:14 +0000 (0:00:00.160) 0:01:01.912 **********
2026-03-09 00:44:17.518935 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:17.518946 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:17.518957 | orchestrator |
2026-03-09 00:44:17.518969 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-09 00:44:17.518980 | orchestrator | Monday 09 March 2026 00:44:16 +0000 (0:00:01.264) 0:01:03.176 **********
2026-03-09 00:44:17.518991 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:17.519002 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:17.519014 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.519024 | orchestrator |
2026-03-09 00:44:17.519036 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-09 00:44:17.519047 | orchestrator | Monday 09 March 2026 00:44:16 +0000 (0:00:00.164) 0:01:03.341 **********
2026-03-09 00:44:17.519058 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.519069 | orchestrator |
2026-03-09 00:44:17.519080 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-09 00:44:17.519091 | orchestrator | Monday 09 March 2026 00:44:16 +0000 (0:00:00.135) 0:01:03.477 **********
2026-03-09 00:44:17.519110 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:17.519121 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:17.519132 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.519143 | orchestrator |
2026-03-09 00:44:17.519155 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-09 00:44:17.519166 | orchestrator | Monday 09 March 2026 00:44:16 +0000 (0:00:00.153) 0:01:03.630 **********
2026-03-09 00:44:17.519176 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.519188 | orchestrator |
2026-03-09 00:44:17.519200 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-09 00:44:17.519211 | orchestrator | Monday 09 March 2026 00:44:16 +0000 (0:00:00.133) 0:01:03.764 **********
2026-03-09 00:44:17.519222 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:17.519232 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:17.519239 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.519246 | orchestrator |
2026-03-09 00:44:17.519253 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-09 00:44:17.519274 | orchestrator | Monday 09 March 2026 00:44:16 +0000 (0:00:00.156) 0:01:03.920 **********
2026-03-09 00:44:17.519286 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.519297 | orchestrator |
2026-03-09 00:44:17.519309 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-09 00:44:17.519319 | orchestrator | Monday 09 March 2026 00:44:16 +0000 (0:00:00.134) 0:01:04.055 **********
2026-03-09 00:44:17.519331 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:17.519343 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:17.519354 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:17.519365 | orchestrator |
2026-03-09 00:44:17.519377 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-09 00:44:17.519388 | orchestrator | Monday 09 March 2026 00:44:17 +0000 (0:00:00.150) 0:01:04.205 **********
2026-03-09 00:44:17.519400 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:44:17.519411 | orchestrator |
2026-03-09 00:44:17.519421 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-09 00:44:17.519432 | orchestrator | Monday 09 March 2026 00:44:17 +0000 (0:00:00.359) 0:01:04.565 **********
2026-03-09 00:44:17.519446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:23.671489 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:23.671621 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.671637 | orchestrator |
2026-03-09 00:44:23.671648 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-09 00:44:23.671659 | orchestrator | Monday 09 March 2026 00:44:17 +0000 (0:00:00.255) 0:01:04.820 **********
2026-03-09 00:44:23.671669 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:23.671678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:23.671705 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.671714 | orchestrator |
2026-03-09 00:44:23.671723 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-09 00:44:23.671733 | orchestrator | Monday 09 March 2026 00:44:17 +0000 (0:00:00.166) 0:01:04.987 **********
2026-03-09 00:44:23.671742 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:23.671751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:23.671760 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.671768 | orchestrator |
2026-03-09 00:44:23.671777 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-09 00:44:23.671799 | orchestrator | Monday 09 March 2026 00:44:18 +0000 (0:00:00.163) 0:01:05.151 **********
2026-03-09 00:44:23.671808 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.671817 | orchestrator |
2026-03-09 00:44:23.671827 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-09 00:44:23.671842 | orchestrator | Monday 09 March 2026 00:44:18 +0000 (0:00:00.162) 0:01:05.313 **********
2026-03-09 00:44:23.671857 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.671866 | orchestrator |
2026-03-09 00:44:23.671874 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-09 00:44:23.671883 | orchestrator | Monday 09 March 2026 00:44:18 +0000 (0:00:00.144) 0:01:05.458 **********
2026-03-09 00:44:23.671892 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.671901 | orchestrator |
2026-03-09 00:44:23.671910 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-09 00:44:23.671918 | orchestrator | Monday 09 March 2026 00:44:18 +0000 (0:00:00.131) 0:01:05.589 **********
2026-03-09 00:44:23.671927 | orchestrator | ok: [testbed-node-5] => {
2026-03-09 00:44:23.671937 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-09 00:44:23.671945 | orchestrator | }
2026-03-09 00:44:23.671955 | orchestrator |
2026-03-09 00:44:23.671964 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-09 00:44:23.671973 | orchestrator | Monday 09 March 2026 00:44:18 +0000 (0:00:00.148) 0:01:05.738 **********
2026-03-09 00:44:23.671981 | orchestrator | ok: [testbed-node-5] => {
2026-03-09 00:44:23.671990 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-09 00:44:23.671999 | orchestrator | }
2026-03-09 00:44:23.672008 | orchestrator |
2026-03-09 00:44:23.672016 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-09 00:44:23.672025 | orchestrator | Monday 09 March 2026 00:44:18 +0000 (0:00:00.140) 0:01:05.879 **********
2026-03-09 00:44:23.672034 | orchestrator | ok: [testbed-node-5] => {
2026-03-09 00:44:23.672042 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-09 00:44:23.672051 | orchestrator | }
2026-03-09 00:44:23.672060 | orchestrator |
2026-03-09 00:44:23.672069 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-09 00:44:23.672077 | orchestrator | Monday 09 March 2026 00:44:18 +0000 (0:00:00.139) 0:01:06.018 **********
2026-03-09 00:44:23.672086 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:44:23.672095 | orchestrator |
2026-03-09 00:44:23.672104 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-09 00:44:23.672113 | orchestrator | Monday 09 March 2026 00:44:19 +0000 (0:00:00.516) 0:01:06.535 **********
2026-03-09 00:44:23.672121 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:44:23.672130 | orchestrator |
2026-03-09 00:44:23.672139 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-09 00:44:23.672148 | orchestrator | Monday 09 March 2026 00:44:19 +0000 (0:00:00.544) 0:01:07.079 **********
2026-03-09 00:44:23.672156 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:44:23.672171 | orchestrator |
2026-03-09 00:44:23.672180 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-09 00:44:23.672189 | orchestrator | Monday 09 March 2026 00:44:20 +0000 (0:00:00.720) 0:01:07.799 **********
2026-03-09 00:44:23.672198 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:44:23.672206 | orchestrator |
2026-03-09 00:44:23.672215 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-09 00:44:23.672223 | orchestrator | Monday 09 March 2026 00:44:20 +0000 (0:00:00.140) 0:01:07.940 **********
2026-03-09 00:44:23.672232 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672241 | orchestrator |
2026-03-09 00:44:23.672249 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-09 00:44:23.672258 | orchestrator | Monday 09 March 2026 00:44:20 +0000 (0:00:00.101) 0:01:08.042 **********
2026-03-09 00:44:23.672267 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672275 | orchestrator |
2026-03-09 00:44:23.672284 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-09 00:44:23.672293 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.119) 0:01:08.161 **********
2026-03-09 00:44:23.672301 | orchestrator | ok: [testbed-node-5] => {
2026-03-09 00:44:23.672310 | orchestrator |     "vgs_report": {
2026-03-09 00:44:23.672319 | orchestrator |         "vg": []
2026-03-09 00:44:23.672343 | orchestrator |     }
2026-03-09 00:44:23.672353 | orchestrator | }
2026-03-09 00:44:23.672362 | orchestrator |
2026-03-09 00:44:23.672371 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-09 00:44:23.672380 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.146) 0:01:08.308 **********
2026-03-09 00:44:23.672388 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672397 | orchestrator |
2026-03-09 00:44:23.672406 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-09 00:44:23.672415 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.137) 0:01:08.445 **********
2026-03-09 00:44:23.672423 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672432 | orchestrator |
2026-03-09 00:44:23.672441 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-09 00:44:23.672450 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.134) 0:01:08.580 **********
2026-03-09 00:44:23.672458 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672467 | orchestrator |
2026-03-09 00:44:23.672476 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-09 00:44:23.672485 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.110) 0:01:08.690 **********
2026-03-09 00:44:23.672493 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672527 | orchestrator |
2026-03-09 00:44:23.672536 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-09 00:44:23.672545 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.128) 0:01:08.819 **********
2026-03-09 00:44:23.672553 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672562 | orchestrator |
2026-03-09 00:44:23.672570 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-09 00:44:23.672579 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.136) 0:01:08.956 **********
2026-03-09 00:44:23.672587 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672596 | orchestrator |
2026-03-09 00:44:23.672605 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-09 00:44:23.672618 | orchestrator | Monday 09 March 2026 00:44:21 +0000 (0:00:00.130) 0:01:09.087 **********
2026-03-09 00:44:23.672627 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672636 | orchestrator |
2026-03-09 00:44:23.672644 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-09 00:44:23.672653 | orchestrator | Monday 09 March 2026 00:44:22 +0000 (0:00:00.149) 0:01:09.236 **********
2026-03-09 00:44:23.672662 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672670 | orchestrator |
2026-03-09 00:44:23.672679 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-09 00:44:23.672697 | orchestrator | Monday 09 March 2026 00:44:22 +0000 (0:00:00.324) 0:01:09.561 **********
2026-03-09 00:44:23.672706 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672715 | orchestrator |
2026-03-09 00:44:23.672723 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-09 00:44:23.672732 | orchestrator | Monday 09 March 2026 00:44:22 +0000 (0:00:00.133) 0:01:09.695 **********
2026-03-09 00:44:23.672741 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672749 | orchestrator |
2026-03-09 00:44:23.672758 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-09 00:44:23.672767 | orchestrator | Monday 09 March 2026 00:44:22 +0000 (0:00:00.150) 0:01:09.846 **********
2026-03-09 00:44:23.672775 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672784 | orchestrator |
2026-03-09 00:44:23.672793 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-09 00:44:23.672801 | orchestrator | Monday 09 March 2026 00:44:22 +0000 (0:00:00.142) 0:01:09.988 **********
2026-03-09 00:44:23.672810 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672818 | orchestrator |
2026-03-09 00:44:23.672827 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-09 00:44:23.672836 | orchestrator | Monday 09 March 2026 00:44:22 +0000 (0:00:00.146) 0:01:10.135 **********
2026-03-09 00:44:23.672844 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672853 | orchestrator |
2026-03-09 00:44:23.672861 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-09 00:44:23.672870 | orchestrator | Monday 09 March 2026 00:44:23 +0000 (0:00:00.138) 0:01:10.273 **********
2026-03-09 00:44:23.672879 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672887 | orchestrator |
2026-03-09 00:44:23.672896 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-09 00:44:23.672904 | orchestrator | Monday 09 March 2026 00:44:23 +0000 (0:00:00.143) 0:01:10.417 **********
2026-03-09 00:44:23.672913 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:23.672922 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:23.672931 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672939 | orchestrator |
2026-03-09 00:44:23.672948 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-09 00:44:23.672957 | orchestrator | Monday 09 March 2026 00:44:23 +0000 (0:00:00.179) 0:01:10.596 **********
2026-03-09 00:44:23.672965 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:23.672974 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:23.672983 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:23.672991 | orchestrator |
2026-03-09 00:44:23.673000 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-09 00:44:23.673009 | orchestrator | Monday 09 March 2026 00:44:23 +0000 (0:00:00.156) 0:01:10.752 **********
2026-03-09 00:44:23.673024 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:26.737751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:26.737840 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:26.737848 | orchestrator |
2026-03-09 00:44:26.737853 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-09 00:44:26.737859 | orchestrator | Monday 09 March 2026 00:44:23 +0000 (0:00:00.137) 0:01:10.890 **********
2026-03-09 00:44:26.737880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:26.737906 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:26.737910 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:26.737914 | orchestrator |
2026-03-09 00:44:26.737918 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-09 00:44:26.737922 | orchestrator | Monday 09 March 2026 00:44:23 +0000 (0:00:00.142) 0:01:11.032 **********
2026-03-09 00:44:26.737926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:26.737940 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:26.737964 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:26.737969 | orchestrator |
2026-03-09 00:44:26.737972 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-09 00:44:26.737976 | orchestrator | Monday 09 March 2026 00:44:24 +0000 (0:00:00.165) 0:01:11.198 **********
2026-03-09 00:44:26.737980 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:26.737984 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:26.737988 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:26.737992 | orchestrator |
2026-03-09 00:44:26.737996 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-09 00:44:26.738000 | orchestrator | Monday 09 March 2026 00:44:24 +0000 (0:00:00.380) 0:01:11.578 **********
2026-03-09 00:44:26.738003 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:26.738071 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:26.738076 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:26.738080 | orchestrator |
2026-03-09 00:44:26.738117 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-09 00:44:26.738121 | orchestrator | Monday 09 March 2026 00:44:24 +0000 (0:00:00.148) 0:01:11.727 **********
2026-03-09 00:44:26.738125 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:26.738129 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:26.738132 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:44:26.738136 | orchestrator |
2026-03-09 00:44:26.738140 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-09 00:44:26.738184 | orchestrator | Monday 09 March 2026 00:44:24 +0000 (0:00:00.146) 0:01:11.874 **********
2026-03-09 00:44:26.738188 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:44:26.738193 | orchestrator |
2026-03-09 00:44:26.738197 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-09 00:44:26.738205 | orchestrator | Monday 09 March 2026 00:44:25 +0000 (0:00:00.512) 0:01:12.387 **********
2026-03-09 00:44:26.738209 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:44:26.738213 | orchestrator |
2026-03-09 00:44:26.738217 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-09 00:44:26.738226 | orchestrator | Monday 09 March 2026 00:44:25 +0000 (0:00:00.532) 0:01:12.919 **********
2026-03-09 00:44:26.738230 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:44:26.738234 | orchestrator |
2026-03-09 00:44:26.738237 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-09 00:44:26.738241 | orchestrator | Monday 09 March 2026 00:44:25 +0000 (0:00:00.140) 0:01:13.060 **********
2026-03-09 00:44:26.738245 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'vg_name': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:26.738250 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'vg_name': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})
2026-03-09 00:44:26.738254 | orchestrator |
2026-03-09 00:44:26.738258 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-09 00:44:26.738262 | orchestrator | Monday 09 March 2026 00:44:26 +0000 (0:00:00.190) 0:01:13.251 **********
2026-03-09 00:44:26.738278 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})
2026-03-09 00:44:26.738282 | orchestrator | skipping: [testbed-node-5] => (item={'data':
'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})  2026-03-09 00:44:26.738286 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:44:26.738290 | orchestrator | 2026-03-09 00:44:26.738294 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-09 00:44:26.738297 | orchestrator | Monday 09 March 2026 00:44:26 +0000 (0:00:00.155) 0:01:13.406 ********** 2026-03-09 00:44:26.738301 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})  2026-03-09 00:44:26.738305 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})  2026-03-09 00:44:26.738310 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:44:26.738314 | orchestrator | 2026-03-09 00:44:26.738319 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-09 00:44:26.738341 | orchestrator | Monday 09 March 2026 00:44:26 +0000 (0:00:00.155) 0:01:13.562 ********** 2026-03-09 00:44:26.738346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'})  2026-03-09 00:44:26.738350 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'})  2026-03-09 00:44:26.738355 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:44:26.738359 | orchestrator | 2026-03-09 00:44:26.738364 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-09 00:44:26.738368 | orchestrator | Monday 09 March 2026 00:44:26 +0000 (0:00:00.153) 0:01:13.716 ********** 2026-03-09 00:44:26.738372 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-09 00:44:26.738377 | orchestrator |  "lvm_report": { 2026-03-09 00:44:26.738381 | orchestrator |  "lv": [ 2026-03-09 00:44:26.738406 | orchestrator |  { 2026-03-09 00:44:26.738438 | orchestrator |  "lv_name": "osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a", 2026-03-09 00:44:26.738444 | orchestrator |  "vg_name": "ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a" 2026-03-09 00:44:26.738477 | orchestrator |  }, 2026-03-09 00:44:26.738482 | orchestrator |  { 2026-03-09 00:44:26.738487 | orchestrator |  "lv_name": "osd-block-d6be2487-d224-518f-9009-30806e6fa587", 2026-03-09 00:44:26.738491 | orchestrator |  "vg_name": "ceph-d6be2487-d224-518f-9009-30806e6fa587" 2026-03-09 00:44:26.738495 | orchestrator |  } 2026-03-09 00:44:26.738514 | orchestrator |  ], 2026-03-09 00:44:26.738519 | orchestrator |  "pv": [ 2026-03-09 00:44:26.738542 | orchestrator |  { 2026-03-09 00:44:26.738559 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-09 00:44:26.738564 | orchestrator |  "vg_name": "ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a" 2026-03-09 00:44:26.738569 | orchestrator |  }, 2026-03-09 00:44:26.738573 | orchestrator |  { 2026-03-09 00:44:26.738578 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-09 00:44:26.738583 | orchestrator |  "vg_name": "ceph-d6be2487-d224-518f-9009-30806e6fa587" 2026-03-09 00:44:26.738589 | orchestrator |  } 2026-03-09 00:44:26.738616 | orchestrator |  ] 2026-03-09 00:44:26.738623 | orchestrator |  } 2026-03-09 00:44:26.738629 | orchestrator | } 2026-03-09 00:44:26.738636 | orchestrator | 2026-03-09 00:44:26.738642 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:44:26.738648 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-09 00:44:26.738694 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-09 00:44:26.738701 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-09 00:44:26.738708 | orchestrator | 2026-03-09 00:44:26.738714 | orchestrator | 2026-03-09 00:44:26.738720 | orchestrator | 2026-03-09 00:44:26.738726 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:44:26.738733 | orchestrator | Monday 09 March 2026 00:44:26 +0000 (0:00:00.140) 0:01:13.857 ********** 2026-03-09 00:44:26.738739 | orchestrator | =============================================================================== 2026-03-09 00:44:26.738771 | orchestrator | Create block VGs -------------------------------------------------------- 5.74s 2026-03-09 00:44:26.739015 | orchestrator | Create block LVs -------------------------------------------------------- 4.28s 2026-03-09 00:44:26.739023 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.81s 2026-03-09 00:44:26.739029 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.75s 2026-03-09 00:44:26.739045 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.72s 2026-03-09 00:44:26.739051 | orchestrator | Add known partitions to the list of available block devices ------------- 1.70s 2026-03-09 00:44:26.739058 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.63s 2026-03-09 00:44:26.739064 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.63s 2026-03-09 00:44:26.739095 | orchestrator | Add known links to the list of available block devices ------------------ 1.36s 2026-03-09 00:44:27.205924 | orchestrator | Print LVM report data --------------------------------------------------- 0.95s 2026-03-09 00:44:27.205996 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-03-09 00:44:27.206002 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-03-09 00:44:27.206007 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s 2026-03-09 00:44:27.206012 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.75s 2026-03-09 00:44:27.206050 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.72s 2026-03-09 00:44:27.206054 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s 2026-03-09 00:44:27.206058 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.71s 2026-03-09 00:44:27.206063 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.71s 2026-03-09 00:44:27.206067 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-03-09 00:44:27.206071 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-03-09 00:44:39.857375 | orchestrator | 2026-03-09 00:44:39 | INFO  | Prepare task for execution of facts. 2026-03-09 00:44:39.922768 | orchestrator | 2026-03-09 00:44:39 | INFO  | Task df2c5a48-a3ac-4d8e-93eb-696a054cf9a1 (facts) was prepared for execution. 2026-03-09 00:44:39.922867 | orchestrator | 2026-03-09 00:44:39 | INFO  | It takes a moment until task df2c5a48-a3ac-4d8e-93eb-696a054cf9a1 (facts) has been started and output is visible here. 
2026-03-09 00:44:52.760917 | orchestrator | 2026-03-09 00:44:52.761047 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-09 00:44:52.761075 | orchestrator | 2026-03-09 00:44:52.761096 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-09 00:44:52.761115 | orchestrator | Monday 09 March 2026 00:44:44 +0000 (0:00:00.280) 0:00:00.280 ********** 2026-03-09 00:44:52.761134 | orchestrator | ok: [testbed-manager] 2026-03-09 00:44:52.761155 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:44:52.761173 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:44:52.761193 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:44:52.761211 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:44:52.761229 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:44:52.761248 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:44:52.761266 | orchestrator | 2026-03-09 00:44:52.761285 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-09 00:44:52.761303 | orchestrator | Monday 09 March 2026 00:44:45 +0000 (0:00:01.151) 0:00:01.431 ********** 2026-03-09 00:44:52.761322 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:44:52.761341 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:44:52.761359 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:44:52.761377 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:44:52.761396 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:52.761414 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:52.761433 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:44:52.761452 | orchestrator | 2026-03-09 00:44:52.761471 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-09 00:44:52.761491 | orchestrator | 2026-03-09 00:44:52.761561 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-09 00:44:52.761581 | orchestrator | Monday 09 March 2026 00:44:46 +0000 (0:00:01.214) 0:00:02.646 ********** 2026-03-09 00:44:52.761601 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:44:52.761620 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:44:52.761639 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:44:52.761658 | orchestrator | ok: [testbed-manager] 2026-03-09 00:44:52.761677 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:44:52.761695 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:44:52.761714 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:44:52.761734 | orchestrator | 2026-03-09 00:44:52.761753 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-09 00:44:52.761772 | orchestrator | 2026-03-09 00:44:52.761791 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-09 00:44:52.761809 | orchestrator | Monday 09 March 2026 00:44:51 +0000 (0:00:05.210) 0:00:07.856 ********** 2026-03-09 00:44:52.761828 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:44:52.761846 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:44:52.761864 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:44:52.761882 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:44:52.761901 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:44:52.761920 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:44:52.761940 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:44:52.761958 | orchestrator | 2026-03-09 00:44:52.761978 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:44:52.761996 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:52.762077 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-09 00:44:52.762137 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:52.762160 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:52.762180 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:52.762202 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:52.762219 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:44:52.762238 | orchestrator | 2026-03-09 00:44:52.762257 | orchestrator | 2026-03-09 00:44:52.762275 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:44:52.762292 | orchestrator | Monday 09 March 2026 00:44:52 +0000 (0:00:00.528) 0:00:08.384 ********** 2026-03-09 00:44:52.762311 | orchestrator | =============================================================================== 2026-03-09 00:44:52.762329 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.21s 2026-03-09 00:44:52.762348 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2026-03-09 00:44:52.762368 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s 2026-03-09 00:44:52.762386 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2026-03-09 00:45:05.178895 | orchestrator | 2026-03-09 00:45:05 | INFO  | Prepare task for execution of frr. 2026-03-09 00:45:05.263256 | orchestrator | 2026-03-09 00:45:05 | INFO  | Task 7d08538c-2b8e-42ad-a757-ba6c5a2a53f5 (frr) was prepared for execution. 
2026-03-09 00:45:05.263336 | orchestrator | 2026-03-09 00:45:05 | INFO  | It takes a moment until task 7d08538c-2b8e-42ad-a757-ba6c5a2a53f5 (frr) has been started and output is visible here. 2026-03-09 00:45:32.417638 | orchestrator | 2026-03-09 00:45:32.417760 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-09 00:45:32.417776 | orchestrator | 2026-03-09 00:45:32.417788 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-09 00:45:32.417800 | orchestrator | Monday 09 March 2026 00:45:09 +0000 (0:00:00.243) 0:00:00.243 ********** 2026-03-09 00:45:32.417812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-09 00:45:32.417825 | orchestrator | 2026-03-09 00:45:32.417836 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-09 00:45:32.417847 | orchestrator | Monday 09 March 2026 00:45:09 +0000 (0:00:00.239) 0:00:00.482 ********** 2026-03-09 00:45:32.417858 | orchestrator | changed: [testbed-manager] 2026-03-09 00:45:32.417870 | orchestrator | 2026-03-09 00:45:32.417881 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-09 00:45:32.417892 | orchestrator | Monday 09 March 2026 00:45:10 +0000 (0:00:01.110) 0:00:01.593 ********** 2026-03-09 00:45:32.417903 | orchestrator | changed: [testbed-manager] 2026-03-09 00:45:32.417914 | orchestrator | 2026-03-09 00:45:32.417925 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-09 00:45:32.417936 | orchestrator | Monday 09 March 2026 00:45:21 +0000 (0:00:10.725) 0:00:12.318 ********** 2026-03-09 00:45:32.417947 | orchestrator | ok: [testbed-manager] 2026-03-09 00:45:32.417959 | orchestrator | 2026-03-09 00:45:32.417970 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-09 00:45:32.417981 | orchestrator | Monday 09 March 2026 00:45:22 +0000 (0:00:01.018) 0:00:13.336 ********** 2026-03-09 00:45:32.417998 | orchestrator | changed: [testbed-manager] 2026-03-09 00:45:32.418121 | orchestrator | 2026-03-09 00:45:32.418139 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-09 00:45:32.418151 | orchestrator | Monday 09 March 2026 00:45:23 +0000 (0:00:00.938) 0:00:14.275 ********** 2026-03-09 00:45:32.418165 | orchestrator | ok: [testbed-manager] 2026-03-09 00:45:32.418178 | orchestrator | 2026-03-09 00:45:32.418191 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-09 00:45:32.418204 | orchestrator | Monday 09 March 2026 00:45:25 +0000 (0:00:01.322) 0:00:15.598 ********** 2026-03-09 00:45:32.418216 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:45:32.418229 | orchestrator | 2026-03-09 00:45:32.418241 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-09 00:45:32.418254 | orchestrator | Monday 09 March 2026 00:45:25 +0000 (0:00:00.154) 0:00:15.753 ********** 2026-03-09 00:45:32.418266 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:45:32.418279 | orchestrator | 2026-03-09 00:45:32.418291 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-09 00:45:32.418304 | orchestrator | Monday 09 March 2026 00:45:25 +0000 (0:00:00.155) 0:00:15.908 ********** 2026-03-09 00:45:32.418316 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:45:32.418328 | orchestrator | 2026-03-09 00:45:32.418340 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-09 00:45:32.418354 | orchestrator | Monday 09 March 2026 00:45:25 +0000 (0:00:00.182) 0:00:16.091 ********** 2026-03-09 
00:45:32.418366 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:45:32.418379 | orchestrator | 2026-03-09 00:45:32.418391 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-09 00:45:32.418404 | orchestrator | Monday 09 March 2026 00:45:25 +0000 (0:00:00.139) 0:00:16.230 ********** 2026-03-09 00:45:32.418417 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:45:32.418429 | orchestrator | 2026-03-09 00:45:32.418441 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-09 00:45:32.418454 | orchestrator | Monday 09 March 2026 00:45:25 +0000 (0:00:00.159) 0:00:16.390 ********** 2026-03-09 00:45:32.418467 | orchestrator | changed: [testbed-manager] 2026-03-09 00:45:32.418479 | orchestrator | 2026-03-09 00:45:32.418521 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-09 00:45:32.418541 | orchestrator | Monday 09 March 2026 00:45:27 +0000 (0:00:01.298) 0:00:17.689 ********** 2026-03-09 00:45:32.418558 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-09 00:45:32.418576 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-09 00:45:32.418596 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-09 00:45:32.418614 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-09 00:45:32.418631 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-09 00:45:32.418642 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-09 00:45:32.418653 | orchestrator | 2026-03-09 00:45:32.418664 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-09 00:45:32.418675 | orchestrator | Monday 09 March 2026 00:45:29 +0000 (0:00:02.320) 0:00:20.009 ********** 2026-03-09 00:45:32.418686 | orchestrator | ok: [testbed-manager] 2026-03-09 00:45:32.418697 | orchestrator | 2026-03-09 00:45:32.418707 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-09 00:45:32.418718 | orchestrator | Monday 09 March 2026 00:45:30 +0000 (0:00:01.331) 0:00:21.341 ********** 2026-03-09 00:45:32.418729 | orchestrator | changed: [testbed-manager] 2026-03-09 00:45:32.418739 | orchestrator | 2026-03-09 00:45:32.418750 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:45:32.418772 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 00:45:32.418783 | orchestrator | 2026-03-09 00:45:32.418793 | orchestrator | 2026-03-09 00:45:32.418832 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:45:32.418845 | orchestrator | Monday 09 March 2026 00:45:32 +0000 (0:00:01.355) 0:00:22.697 ********** 2026-03-09 00:45:32.418856 | orchestrator | =============================================================================== 2026-03-09 00:45:32.418866 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.73s 2026-03-09 00:45:32.418877 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.32s 2026-03-09 00:45:32.418888 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.36s 2026-03-09 00:45:32.418899 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.33s 2026-03-09 00:45:32.418910 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.32s 
2026-03-09 00:45:32.418920 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.30s 2026-03-09 00:45:32.418931 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.11s 2026-03-09 00:45:32.418942 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.02s 2026-03-09 00:45:32.418952 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.94s 2026-03-09 00:45:32.418963 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.24s 2026-03-09 00:45:32.418974 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.18s 2026-03-09 00:45:32.418984 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-03-09 00:45:32.418995 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.16s 2026-03-09 00:45:32.419006 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s 2026-03-09 00:45:32.419017 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-03-09 00:45:32.920364 | orchestrator | 2026-03-09 00:45:32.923031 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Mar 9 00:45:32 UTC 2026 2026-03-09 00:45:32.923083 | orchestrator | 2026-03-09 00:45:34.875917 | orchestrator | 2026-03-09 00:45:34 | INFO  | Collection nutshell is prepared for execution 2026-03-09 00:45:34.876013 | orchestrator | 2026-03-09 00:45:34 | INFO  | A [0] - dotfiles 2026-03-09 00:45:44.924298 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [0] - homer 2026-03-09 00:45:44.924406 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [0] - netdata 2026-03-09 00:45:44.924427 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [0] - openstackclient 2026-03-09 00:45:44.924446 | orchestrator | 2026-03-09 00:45:44 
| INFO  | A [0] - phpmyadmin 2026-03-09 00:45:44.924462 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [0] - common 2026-03-09 00:45:44.924476 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [1] -- loadbalancer 2026-03-09 00:45:44.924486 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [2] --- opensearch 2026-03-09 00:45:44.924561 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [2] --- mariadb-ng 2026-03-09 00:45:44.924571 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [3] ---- horizon 2026-03-09 00:45:44.924581 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [3] ---- keystone 2026-03-09 00:45:44.924591 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [4] ----- neutron 2026-03-09 00:45:44.924601 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [5] ------ wait-for-nova 2026-03-09 00:45:44.924612 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [6] ------- octavia 2026-03-09 00:45:44.925201 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [4] ----- barbican 2026-03-09 00:45:44.925250 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [4] ----- designate 2026-03-09 00:45:44.925261 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [4] ----- ironic 2026-03-09 00:45:44.925271 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [4] ----- placement 2026-03-09 00:45:44.925281 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [4] ----- magnum 2026-03-09 00:45:44.925528 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [1] -- openvswitch 2026-03-09 00:45:44.925614 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [2] --- ovn 2026-03-09 00:45:44.925705 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [1] -- memcached 2026-03-09 00:45:44.925722 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [1] -- redis 2026-03-09 00:45:44.925732 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [1] -- rabbitmq-ng 2026-03-09 00:45:44.925842 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [0] - kubernetes 2026-03-09 00:45:44.928109 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [1] -- 
kubeconfig 2026-03-09 00:45:44.928212 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [1] -- copy-kubeconfig 2026-03-09 00:45:44.928230 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [0] - ceph 2026-03-09 00:45:44.929415 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [1] -- ceph-pools 2026-03-09 00:45:44.929449 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [2] --- copy-ceph-keys 2026-03-09 00:45:44.929575 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [3] ---- cephclient 2026-03-09 00:45:44.929656 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-09 00:45:44.929689 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [4] ----- wait-for-keystone 2026-03-09 00:45:44.929701 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-09 00:45:44.929720 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [5] ------ glance 2026-03-09 00:45:44.929732 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [5] ------ cinder 2026-03-09 00:45:44.929743 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [5] ------ nova 2026-03-09 00:45:44.929948 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [4] ----- prometheus 2026-03-09 00:45:44.929969 | orchestrator | 2026-03-09 00:45:44 | INFO  | A [5] ------ grafana 2026-03-09 00:45:45.128046 | orchestrator | 2026-03-09 00:45:45 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-09 00:45:45.128129 | orchestrator | 2026-03-09 00:45:45 | INFO  | Tasks are running in the background 2026-03-09 00:45:48.407685 | orchestrator | 2026-03-09 00:45:48 | INFO  | No task IDs specified, wait for all currently running tasks 2026-03-09 00:45:50.519761 | orchestrator | 2026-03-09 00:45:50 | INFO  | Task a6a5cc27-4aa5-425d-8cca-d41f1e7a6ea4 is in state STARTED 2026-03-09 00:45:50.519982 | orchestrator | 2026-03-09 00:45:50 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:45:50.520019 | orchestrator | 2026-03-09 00:45:50 | INFO 
 | Task 8d3de0fe-88b6-4412-bfbb-e4a629a3f452 is in state STARTED 2026-03-09 00:45:50.521286 | orchestrator | 2026-03-09 00:45:50 | INFO  | Task 857aa086-68ee-4109-af1c-76646ffb6692 is in state STARTED 2026-03-09 00:45:50.521626 | orchestrator | 2026-03-09 00:45:50 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:45:50.522278 | orchestrator | 2026-03-09 00:45:50 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:45:50.522879 | orchestrator | 2026-03-09 00:45:50 | INFO  | Task 4b2e33c8-6630-4d96-b431-b34311607a86 is in state STARTED 2026-03-09 00:45:50.523030 | orchestrator | 2026-03-09 00:45:50 | INFO  | Wait 1 second(s) until the next check 2026-03-09 
00:46:05 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:46:09.252828 | orchestrator | 2026-03-09 00:46:09 | INFO  | Task a6a5cc27-4aa5-425d-8cca-d41f1e7a6ea4 is in state STARTED 2026-03-09 00:46:09.252922 | orchestrator | 2026-03-09 00:46:09 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:46:09.252940 | orchestrator | 2026-03-09 00:46:09 | INFO  | Task 8d3de0fe-88b6-4412-bfbb-e4a629a3f452 is in state STARTED 2026-03-09 00:46:09.252960 | orchestrator | 2026-03-09 00:46:09 | INFO  | Task 857aa086-68ee-4109-af1c-76646ffb6692 is in state STARTED 2026-03-09 00:46:09.252980 | orchestrator | 2026-03-09 00:46:09 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:46:09.253001 | orchestrator | 2026-03-09 00:46:09 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:46:09.253021 | orchestrator | 2026-03-09 00:46:09 | INFO  | Task 4b2e33c8-6630-4d96-b431-b34311607a86 is in state STARTED 2026-03-09 00:46:09.253042 | orchestrator | 2026-03-09 00:46:09 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:46:12.403149 | orchestrator | 2026-03-09 00:46:12 | INFO  | Task a6a5cc27-4aa5-425d-8cca-d41f1e7a6ea4 is in state STARTED 2026-03-09 00:46:12.405003 | orchestrator | 2026-03-09 00:46:12 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:46:12.407186 | orchestrator | 2026-03-09 00:46:12 | INFO  | Task 8d3de0fe-88b6-4412-bfbb-e4a629a3f452 is in state SUCCESS 2026-03-09 00:46:12.407473 | orchestrator | 2026-03-09 00:46:12.407606 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-09 00:46:12.407620 | orchestrator | 2026-03-09 00:46:12.407632 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-03-09 00:46:12.407644 | orchestrator | Monday 09 March 2026 00:45:57 +0000 (0:00:00.798) 0:00:00.798 **********
2026-03-09 00:46:12.407655 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:46:12.407667 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:46:12.407679 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:46:12.407690 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:46:12.407737 | orchestrator | changed: [testbed-manager]
2026-03-09 00:46:12.407749 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:46:12.407760 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:46:12.407782 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-09 00:46:12.407792 | orchestrator | Monday 09 March 2026 00:46:02 +0000 (0:00:05.237) 0:00:06.036 **********
2026-03-09 00:46:12.407804 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-09 00:46:12.407816 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-09 00:46:12.407827 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-09 00:46:12.407838 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-09 00:46:12.407848 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-09 00:46:12.407859 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-09 00:46:12.407870 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-09 00:46:12.407892 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-09 00:46:12.407903 | orchestrator | Monday 09 March 2026 00:46:04 +0000 (0:00:01.614) 0:00:07.651 **********
2026-03-09 00:46:12.407918 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-09 00:46:12.407940 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-09 00:46:12.407965 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-09 00:46:12.408005 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-09 00:46:12.408031 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-09 00:46:12.408045 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-09 00:46:12.408059 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
[... full per-host loop-result dicts trimmed: on every host `ls -F ~/.tmux.conf` exited with rc=2 ("ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"); failed_when_result was False, so each host reported ok ...]
2026-03-09 00:46:12.408177 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-09 00:46:12.408190 | orchestrator | Monday 09 March 2026 00:46:07 +0000 (0:00:02.800) 0:00:10.452 **********
2026-03-09 00:46:12.408204 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-09 00:46:12.408218 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-09 00:46:12.408229 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-09 00:46:12.408240 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-09 00:46:12.408251 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-09 00:46:12.408265 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-09 00:46:12.408284 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-09 00:46:12.408321 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-09 00:46:12.408351 | orchestrator | Monday 09 March 2026 00:46:09 +0000 (0:00:02.501) 0:00:12.953 **********
2026-03-09 00:46:12.408367 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-09 00:46:12.408385 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-09 00:46:12.408402 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-09 00:46:12.408419 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-09 00:46:12.408437 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-09 00:46:12.408455 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-09 00:46:12.408475 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-09 00:46:12.408545 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:46:12.408574 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:46:12.408588 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:46:12.408616 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:46:12.408628 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:46:12.408639 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:46:12.408650 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:46:12.408661 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:46:12.408694 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:46:12.408705 | orchestrator | Monday 09 March 2026 00:46:11 +0000 (0:00:02.067) 0:00:15.020 **********
2026-03-09 00:46:12.408716 | orchestrator | ===============================================================================
2026-03-09 00:46:12.408727 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.24s
2026-03-09 00:46:12.408738 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.80s
2026-03-09 00:46:12.408749 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.50s
2026-03-09 00:46:12.408760 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.07s
2026-03-09 00:46:12.408771 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.61s
2026-03-09 00:46:12.410579 | orchestrator | 2026-03-09 00:46:12 | INFO  | Task 857aa086-68ee-4109-af1c-76646ffb6692 is in state STARTED
2026-03-09 00:46:12.414480 | orchestrator | 2026-03-09 00:46:12 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED
2026-03-09 00:46:12.420213 | orchestrator | 2026-03-09 00:46:12 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED
2026-03-09 00:46:12.422228 | orchestrator | 2026-03-09 00:46:12 | INFO  | Task 4b2e33c8-6630-4d96-b431-b34311607a86 is in state STARTED
2026-03-09 00:46:12.422621 | orchestrator | 2026-03-09 00:46:12 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:46:15.575125 | orchestrator | 2026-03-09 00:46:15 | INFO  | Task a6a5cc27-4aa5-425d-8cca-d41f1e7a6ea4 is in state STARTED
2026-03-09 00:46:15.575228 | orchestrator | 2026-03-09 00:46:15 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:46:15.575278 | orchestrator | 2026-03-09 00:46:15 | INFO  | Task 857aa086-68ee-4109-af1c-76646ffb6692 is
in state STARTED
2026-03-09 00:46:15.575300 | orchestrator | 2026-03-09 00:46:15 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED
2026-03-09 00:46:15.575321 | orchestrator | 2026-03-09 00:46:15 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED
2026-03-09 00:46:15.575341 | orchestrator | 2026-03-09 00:46:15 | INFO  | Task 4b2e33c8-6630-4d96-b431-b34311607a86 is in state STARTED
2026-03-09 00:46:15.575363 | orchestrator | 2026-03-09 00:46:15 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state STARTED
2026-03-09 00:46:15.575377 | orchestrator | 2026-03-09 00:46:15 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 s at 00:46:18, 00:46:21, 00:46:24, 00:46:27, 00:46:30 and 00:46:33; tasks a6a5cc27, 9cfa2bbb, 857aa086, 7a55c0fa, 4e1e2c2a, 4b2e33c8 and 40d8d2f4 all remained in state STARTED ...]
2026-03-09 00:46:36.885069 | orchestrator | 2026-03-09 00:46:36 | INFO  | Task a6a5cc27-4aa5-425d-8cca-d41f1e7a6ea4 is in state SUCCESS
[... checks at 00:46:36, 00:46:39 and 00:46:42 showed tasks 9cfa2bbb, 857aa086, 7a55c0fa, 4e1e2c2a, 4b2e33c8 and 40d8d2f4 still in state STARTED ...]
2026-03-09 00:46:46.033998 | orchestrator | 2026-03-09 00:46:46 | INFO  | Task 857aa086-68ee-4109-af1c-76646ffb6692 is in state SUCCESS
[... checks at 00:46:46, 00:46:49, 00:46:52, 00:46:55, 00:46:58, 00:47:01 and 00:47:04 showed tasks 9cfa2bbb, 7a55c0fa, 4e1e2c2a, 4b2e33c8 and 40d8d2f4 still in state STARTED ...]
2026-03-09 00:47:07.653843 | orchestrator | 2026-03-09 00:47:07 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:47:07.653947 | orchestrator | 2026-03-09 00:47:07 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED
2026-03-09 00:47:07.655008 | orchestrator 
| 2026-03-09 00:47:07 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:07.655937 | orchestrator | 2026-03-09 00:47:07 | INFO  | Task 4b2e33c8-6630-4d96-b431-b34311607a86 is in state STARTED 2026-03-09 00:47:07.657343 | orchestrator | 2026-03-09 00:47:07 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state STARTED 2026-03-09 00:47:07.658269 | orchestrator | 2026-03-09 00:47:07 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:10.715140 | orchestrator | 2026-03-09 00:47:10 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:10.719264 | orchestrator | 2026-03-09 00:47:10 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:10.724288 | orchestrator | 2026-03-09 00:47:10 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:10.725157 | orchestrator | 2026-03-09 00:47:10 | INFO  | Task 4b2e33c8-6630-4d96-b431-b34311607a86 is in state STARTED 2026-03-09 00:47:10.726261 | orchestrator | 2026-03-09 00:47:10 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state STARTED 2026-03-09 00:47:10.726804 | orchestrator | 2026-03-09 00:47:10 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:13.768681 | orchestrator | 2026-03-09 00:47:13 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:13.770937 | orchestrator | 2026-03-09 00:47:13 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:13.772088 | orchestrator | 2026-03-09 00:47:13 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:13.774112 | orchestrator | 2026-03-09 00:47:13 | INFO  | Task 4b2e33c8-6630-4d96-b431-b34311607a86 is in state STARTED 2026-03-09 00:47:13.776944 | orchestrator | 2026-03-09 00:47:13 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state STARTED 2026-03-09 00:47:13.776987 | orchestrator | 
2026-03-09 00:47:13 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:16.837039 | orchestrator | 2026-03-09 00:47:16 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:16.838713 | orchestrator | 2026-03-09 00:47:16 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:16.839391 | orchestrator | 2026-03-09 00:47:16 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:16.839978 | orchestrator | 2026-03-09 00:47:16 | INFO  | Task 4b2e33c8-6630-4d96-b431-b34311607a86 is in state STARTED 2026-03-09 00:47:16.841561 | orchestrator | 2026-03-09 00:47:16 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state STARTED 2026-03-09 00:47:16.841602 | orchestrator | 2026-03-09 00:47:16 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:19.893298 | orchestrator | 2026-03-09 00:47:19 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:19.893398 | orchestrator | 2026-03-09 00:47:19 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:19.894326 | orchestrator | 2026-03-09 00:47:19 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:19.896404 | orchestrator | 2026-03-09 00:47:19 | INFO  | Task 4b2e33c8-6630-4d96-b431-b34311607a86 is in state STARTED 2026-03-09 00:47:19.897411 | orchestrator | 2026-03-09 00:47:19 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state STARTED 2026-03-09 00:47:19.897540 | orchestrator | 2026-03-09 00:47:19 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:22.942530 | orchestrator | 2026-03-09 00:47:22 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:22.943003 | orchestrator | 2026-03-09 00:47:22 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:22.943820 | orchestrator | 2026-03-09 00:47:22 | INFO  | 
Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED
2026-03-09 00:47:22.945622 | orchestrator | 2026-03-09 00:47:22 | INFO  | Task 4b2e33c8-6630-4d96-b431-b34311607a86 is in state STARTED
2026-03-09 00:47:22.946403 | orchestrator | 2026-03-09 00:47:22 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state STARTED
2026-03-09 00:47:22.946449 | orchestrator | 2026-03-09 00:47:22 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:25.996986 | orchestrator | 2026-03-09 00:47:25 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:47:25.998986 | orchestrator | 2026-03-09 00:47:25 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED
2026-03-09 00:47:26.000086 | orchestrator | 2026-03-09 00:47:26 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED
2026-03-09 00:47:26.001002 | orchestrator | 2026-03-09 00:47:26 | INFO  | Task 4b2e33c8-6630-4d96-b431-b34311607a86 is in state SUCCESS
2026-03-09 00:47:26.002396 | orchestrator |
2026-03-09 00:47:26.002479 | orchestrator |
2026-03-09 00:47:26.002502 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-09 00:47:26.002519 | orchestrator |
2026-03-09 00:47:26.002537 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-09 00:47:26.002553 | orchestrator | Monday 09 March 2026 00:46:00 +0000 (0:00:00.328) 0:00:00.328 **********
2026-03-09 00:47:26.002571 | orchestrator | ok: [testbed-manager] => {
2026-03-09 00:47:26.002590 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-09 00:47:26.002607 | orchestrator | }
2026-03-09 00:47:26.002621 | orchestrator |
2026-03-09 00:47:26.002631 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-09 00:47:26.002641 | orchestrator | Monday 09 March 2026 00:46:00 +0000 (0:00:00.211) 0:00:00.539 **********
2026-03-09 00:47:26.002651 | orchestrator | ok: [testbed-manager]
2026-03-09 00:47:26.002663 | orchestrator |
2026-03-09 00:47:26.002673 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-09 00:47:26.002683 | orchestrator | Monday 09 March 2026 00:46:01 +0000 (0:00:01.423) 0:00:01.962 **********
2026-03-09 00:47:26.002693 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-09 00:47:26.002721 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-09 00:47:26.002731 | orchestrator |
2026-03-09 00:47:26.002742 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-09 00:47:26.002752 | orchestrator | Monday 09 March 2026 00:46:03 +0000 (0:00:01.297) 0:00:03.260 **********
2026-03-09 00:47:26.002761 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.002771 | orchestrator |
2026-03-09 00:47:26.002781 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-09 00:47:26.002791 | orchestrator | Monday 09 March 2026 00:46:06 +0000 (0:00:03.038) 0:00:06.298 **********
2026-03-09 00:47:26.002816 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.002832 | orchestrator |
2026-03-09 00:47:26.002848 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-09 00:47:26.002864 | orchestrator | Monday 09 March 2026 00:46:07 +0000 (0:00:01.485) 0:00:07.784 **********
2026-03-09 00:47:26.002880 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-09 00:47:26.002896 | orchestrator | ok: [testbed-manager]
2026-03-09 00:47:26.002911 | orchestrator |
2026-03-09 00:47:26.002928 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-09 00:47:26.002944 | orchestrator | Monday 09 March 2026 00:46:33 +0000 (0:00:26.257) 0:00:34.042 **********
2026-03-09 00:47:26.002965 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.002991 | orchestrator |
2026-03-09 00:47:26.003014 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:47:26.003032 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:26.003050 | orchestrator |
2026-03-09 00:47:26.003083 | orchestrator |
2026-03-09 00:47:26.003115 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:47:26.003139 | orchestrator | Monday 09 March 2026 00:46:36 +0000 (0:00:02.483) 0:00:36.525 **********
2026-03-09 00:47:26.003157 | orchestrator | ===============================================================================
2026-03-09 00:47:26.003176 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.26s
2026-03-09 00:47:26.003193 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.04s
2026-03-09 00:47:26.003210 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.48s
2026-03-09 00:47:26.003227 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.49s
2026-03-09 00:47:26.003245 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.42s
2026-03-09 00:47:26.003260 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.30s
2026-03-09 00:47:26.003277 | orchestrator | osism.services.homer : Inform
about new parameter homer_url_opensearch_dashboards --- 0.21s
2026-03-09 00:47:26.003294 | orchestrator |
2026-03-09 00:47:26.003309 | orchestrator |
2026-03-09 00:47:26.003325 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-09 00:47:26.003341 | orchestrator |
2026-03-09 00:47:26.003356 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-09 00:47:26.003373 | orchestrator | Monday 09 March 2026 00:45:57 +0000 (0:00:00.743) 0:00:00.743 **********
2026-03-09 00:47:26.003390 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-09 00:47:26.003408 | orchestrator |
2026-03-09 00:47:26.003424 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-09 00:47:26.003442 | orchestrator | Monday 09 March 2026 00:45:58 +0000 (0:00:00.975) 0:00:01.719 **********
2026-03-09 00:47:26.003483 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-09 00:47:26.003501 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-09 00:47:26.003518 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-09 00:47:26.003550 | orchestrator |
2026-03-09 00:47:26.003569 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-09 00:47:26.003586 | orchestrator | Monday 09 March 2026 00:46:00 +0000 (0:00:01.992) 0:00:03.711 **********
2026-03-09 00:47:26.003604 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.003621 | orchestrator |
2026-03-09 00:47:26.003638 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-09 00:47:26.003656 | orchestrator | Monday 09 March 2026 00:46:03 +0000 (0:00:02.506) 0:00:06.217 **********
2026-03-09 00:47:26.003693 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-09 00:47:26.003709 | orchestrator | ok: [testbed-manager]
2026-03-09 00:47:26.003726 | orchestrator |
2026-03-09 00:47:26.003744 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-09 00:47:26.003761 | orchestrator | Monday 09 March 2026 00:46:38 +0000 (0:00:34.755) 0:00:40.972 **********
2026-03-09 00:47:26.003778 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.003796 | orchestrator |
2026-03-09 00:47:26.003813 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-09 00:47:26.003831 | orchestrator | Monday 09 March 2026 00:46:39 +0000 (0:00:00.932) 0:00:41.904 **********
2026-03-09 00:47:26.003849 | orchestrator | ok: [testbed-manager]
2026-03-09 00:47:26.003867 | orchestrator |
2026-03-09 00:47:26.003883 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-09 00:47:26.003900 | orchestrator | Monday 09 March 2026 00:46:39 +0000 (0:00:00.787) 0:00:42.692 **********
2026-03-09 00:47:26.003915 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.003930 | orchestrator |
2026-03-09 00:47:26.003946 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-09 00:47:26.003962 | orchestrator | Monday 09 March 2026 00:46:41 +0000 (0:00:01.390) 0:00:44.082 **********
2026-03-09 00:47:26.003978 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.003994 | orchestrator |
2026-03-09 00:47:26.004010 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-09 00:47:26.004026 | orchestrator | Monday 09 March 2026 00:46:41 +0000 (0:00:00.700) 0:00:44.783 **********
2026-03-09 00:47:26.004042 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.004057 | orchestrator |
2026-03-09 00:47:26.004073 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-09 00:47:26.004090 | orchestrator | Monday 09 March 2026 00:46:43 +0000 (0:00:01.226) 0:00:46.010 **********
2026-03-09 00:47:26.004105 | orchestrator | ok: [testbed-manager]
2026-03-09 00:47:26.004122 | orchestrator |
2026-03-09 00:47:26.004147 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:47:26.004164 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:26.004181 | orchestrator |
2026-03-09 00:47:26.004197 | orchestrator |
2026-03-09 00:47:26.004213 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:47:26.004229 | orchestrator | Monday 09 March 2026 00:46:43 +0000 (0:00:00.398) 0:00:46.408 **********
2026-03-09 00:47:26.004245 | orchestrator | ===============================================================================
2026-03-09 00:47:26.004261 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.75s
2026-03-09 00:47:26.004277 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.51s
2026-03-09 00:47:26.004293 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.99s
2026-03-09 00:47:26.004309 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.39s
2026-03-09 00:47:26.004325 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.23s
2026-03-09 00:47:26.004340 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.97s
2026-03-09 00:47:26.004356 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.93s
2026-03-09 00:47:26.004384 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.79s
2026-03-09 00:47:26.004401 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.70s
2026-03-09 00:47:26.004418 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.40s
2026-03-09 00:47:26.004435 | orchestrator |
2026-03-09 00:47:26.004499 | orchestrator |
2026-03-09 00:47:26.004518 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 00:47:26.004537 | orchestrator |
2026-03-09 00:47:26.004554 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 00:47:26.004571 | orchestrator | Monday 09 March 2026 00:45:57 +0000 (0:00:00.642) 0:00:00.642 **********
2026-03-09 00:47:26.004588 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-09 00:47:26.004605 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-09 00:47:26.004622 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-09 00:47:26.004638 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-09 00:47:26.004655 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-09 00:47:26.004672 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-09 00:47:26.004688 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-09 00:47:26.004704 | orchestrator |
2026-03-09 00:47:26.004721 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-09 00:47:26.004738 | orchestrator |
2026-03-09 00:47:26.004754 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-09 00:47:26.004772 | orchestrator | Monday 09 March 2026 00:46:00 +0000 (0:00:02.135) 0:00:02.778 **********
2026-03-09 00:47:26.004803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:47:26.004824 | orchestrator |
2026-03-09 00:47:26.004842 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-09 00:47:26.004859 | orchestrator | Monday 09 March 2026 00:46:01 +0000 (0:00:01.331) 0:00:04.109 **********
2026-03-09 00:47:26.004874 | orchestrator | ok: [testbed-manager]
2026-03-09 00:47:26.004889 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:47:26.004907 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:47:26.004923 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:47:26.004941 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:47:26.004971 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:47:26.004990 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:26.005008 | orchestrator |
2026-03-09 00:47:26.005025 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-09 00:47:26.005041 | orchestrator | Monday 09 March 2026 00:46:03 +0000 (0:00:01.889) 0:00:05.998 **********
2026-03-09 00:47:26.005057 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:47:26.005075 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:47:26.005092 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:47:26.005111 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:47:26.005129 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:26.005147 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:47:26.005165 | orchestrator | ok: [testbed-manager]
2026-03-09 00:47:26.005183 | orchestrator |
2026-03-09 00:47:26.005202 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-09 00:47:26.005220 | orchestrator | Monday 09 March 2026 00:46:06 +0000 (0:00:03.215) 0:00:09.213 **********
2026-03-09 00:47:26.005239 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.005256 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:47:26.005273 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:47:26.005290 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:47:26.005307 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:47:26.005334 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:47:26.005351 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:47:26.005370 | orchestrator |
2026-03-09 00:47:26.005386 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-09 00:47:26.005404 | orchestrator | Monday 09 March 2026 00:46:08 +0000 (0:00:02.250) 0:00:11.464 **********
2026-03-09 00:47:26.005420 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:47:26.005437 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:47:26.005474 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:47:26.005492 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:47:26.005508 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:47:26.005524 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:47:26.005540 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.005558 | orchestrator |
2026-03-09 00:47:26.005583 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-09 00:47:26.005601 | orchestrator | Monday 09 March 2026 00:46:20 +0000 (0:00:12.138) 0:00:23.602 **********
2026-03-09 00:47:26.005868 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:47:26.005887 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:47:26.005905 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:47:26.005922 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:47:26.005940 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:47:26.005958 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.005977 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:47:26.005995 | orchestrator |
2026-03-09 00:47:26.006013 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-09 00:47:26.006112 | orchestrator | Monday 09 March 2026 00:46:55 +0000 (0:00:34.168) 0:00:57.770 **********
2026-03-09 00:47:26.006131 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:47:26.006151 | orchestrator |
2026-03-09 00:47:26.006169 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-09 00:47:26.006188 | orchestrator | Monday 09 March 2026 00:46:56 +0000 (0:00:01.145) 0:00:58.915 **********
2026-03-09 00:47:26.006206 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-09 00:47:26.006225 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-09 00:47:26.006243 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-09 00:47:26.006261 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-09 00:47:26.006279 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-09 00:47:26.006297 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-09 00:47:26.006316 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-09 00:47:26.006334 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-09 00:47:26.006352 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-09 00:47:26.006371 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-09 00:47:26.006389 | orchestrator | changed: [testbed-node-3] =>
(item=stream.conf)
2026-03-09 00:47:26.006407 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-09 00:47:26.006426 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-09 00:47:26.006444 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-09 00:47:26.006484 | orchestrator |
2026-03-09 00:47:26.006504 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-09 00:47:26.006524 | orchestrator | Monday 09 March 2026 00:47:02 +0000 (0:00:06.231) 0:01:05.147 **********
2026-03-09 00:47:26.006544 | orchestrator | ok: [testbed-manager]
2026-03-09 00:47:26.006563 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:47:26.006582 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:47:26.006602 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:47:26.006636 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:47:26.006654 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:26.006673 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:47:26.006692 | orchestrator |
2026-03-09 00:47:26.006711 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-09 00:47:26.006730 | orchestrator | Monday 09 March 2026 00:47:03 +0000 (0:00:01.333) 0:01:06.480 **********
2026-03-09 00:47:26.006748 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:47:26.006765 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:47:26.006783 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.006801 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:47:26.006818 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:47:26.006835 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:47:26.006850 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:47:26.006867 | orchestrator |
2026-03-09 00:47:26.006883 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-09 00:47:26.006913 | orchestrator | Monday 09 March 2026 00:47:05 +0000 (0:00:02.002) 0:01:08.483 **********
2026-03-09 00:47:26.006930 | orchestrator | ok: [testbed-manager]
2026-03-09 00:47:26.006946 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:47:26.006963 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:47:26.006980 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:47:26.006998 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:47:26.007015 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:47:26.007031 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:26.007047 | orchestrator |
2026-03-09 00:47:26.007063 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-09 00:47:26.007079 | orchestrator | Monday 09 March 2026 00:47:07 +0000 (0:00:02.078) 0:01:10.562 **********
2026-03-09 00:47:26.007097 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:47:26.007115 | orchestrator | ok: [testbed-manager]
2026-03-09 00:47:26.007132 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:47:26.007147 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:47:26.007163 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:47:26.007180 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:47:26.007198 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:47:26.007216 | orchestrator |
2026-03-09 00:47:26.007234 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-09 00:47:26.007252 | orchestrator | Monday 09 March 2026 00:47:10 +0000 (0:00:02.689) 0:01:13.251 **********
2026-03-09 00:47:26.007271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-09 00:47:26.007291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:47:26.007310 | orchestrator |
2026-03-09 00:47:26.007328 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-09 00:47:26.007343 | orchestrator | Monday 09 March 2026 00:47:12 +0000 (0:00:01.530) 0:01:14.782 **********
2026-03-09 00:47:26.007359 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.007375 | orchestrator |
2026-03-09 00:47:26.007391 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-09 00:47:26.007407 | orchestrator | Monday 09 March 2026 00:47:14 +0000 (0:00:02.079) 0:01:16.862 **********
2026-03-09 00:47:26.007423 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:47:26.007440 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:47:26.007483 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:47:26.007501 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:47:26.007520 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:47:26.007537 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:47:26.007556 | orchestrator | changed: [testbed-manager]
2026-03-09 00:47:26.007574 | orchestrator |
2026-03-09 00:47:26.007587 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:47:26.007598 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:26.007619 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:26.007629 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:26.007640 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:26.007650 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:26.007660 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:26.007706 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:47:26.007718 | orchestrator |
2026-03-09 00:47:26.007728 | orchestrator |
2026-03-09 00:47:26.007737 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:47:26.007747 | orchestrator | Monday 09 March 2026 00:47:25 +0000 (0:00:11.415) 0:01:28.277 **********
2026-03-09 00:47:26.007757 | orchestrator | ===============================================================================
2026-03-09 00:47:26.007767 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 34.17s
2026-03-09 00:47:26.007776 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.14s
2026-03-09 00:47:26.007786 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.42s
2026-03-09 00:47:26.007796 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.23s
2026-03-09 00:47:26.007805 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.22s
2026-03-09 00:47:26.007815 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.69s
2026-03-09 00:47:26.007825 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.25s
2026-03-09 00:47:26.007836 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.14s
2026-03-09 00:47:26.007853 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.08s
2026-03-09 00:47:26.007876 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.08s
2026-03-09 00:47:26.007891 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.00s
2026-03-09 00:47:26.007915 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.89s
2026-03-09 00:47:26.007930 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.53s
2026-03-09 00:47:26.007944 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.33s
2026-03-09 00:47:26.007957 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.33s
2026-03-09 00:47:26.007970 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.15s
2026-03-09 00:47:26.007983 | orchestrator | 2026-03-09 00:47:26 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state STARTED
2026-03-09 00:47:26.007996 | orchestrator | 2026-03-09 00:47:26 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:29.059534 | orchestrator | 2026-03-09 00:47:29 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:47:29.059599 | orchestrator | 2026-03-09 00:47:29 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED
2026-03-09 00:47:29.059607 | orchestrator | 2026-03-09 00:47:29 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED
2026-03-09 00:47:29.062232 | orchestrator | 2026-03-09 00:47:29 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state STARTED
2026-03-09 00:47:29.062257 | orchestrator | 2026-03-09 00:47:29 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:47:32.122008 | orchestrator | 2026-03-09 00:47:32 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:47:32.124180 | orchestrator | 2026-03-09 00:47:32 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED
2026-03-09 00:47:32.126759 | orchestrator | 2026-03-09 00:47:32 | INFO  
| Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:32.132764 | orchestrator | 2026-03-09 00:47:32 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state STARTED 2026-03-09 00:47:32.133747 | orchestrator | 2026-03-09 00:47:32 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:35.213648 | orchestrator | 2026-03-09 00:47:35 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:35.214102 | orchestrator | 2026-03-09 00:47:35 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:35.216940 | orchestrator | 2026-03-09 00:47:35 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:35.220811 | orchestrator | 2026-03-09 00:47:35 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state STARTED 2026-03-09 00:47:35.220885 | orchestrator | 2026-03-09 00:47:35 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:38.292550 | orchestrator | 2026-03-09 00:47:38 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:38.297230 | orchestrator | 2026-03-09 00:47:38 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:38.306779 | orchestrator | 2026-03-09 00:47:38 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:38.310300 | orchestrator | 2026-03-09 00:47:38 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state STARTED 2026-03-09 00:47:38.310369 | orchestrator | 2026-03-09 00:47:38 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:41.355264 | orchestrator | 2026-03-09 00:47:41 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:41.357061 | orchestrator | 2026-03-09 00:47:41 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:41.359008 | orchestrator | 2026-03-09 00:47:41 | INFO  | Task 
4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:41.360316 | orchestrator | 2026-03-09 00:47:41 | INFO  | Task 40d8d2f4-ebae-49aa-ac8b-383f1952abf7 is in state SUCCESS 2026-03-09 00:47:41.361142 | orchestrator | 2026-03-09 00:47:41 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:44.401227 | orchestrator | 2026-03-09 00:47:44 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:44.403769 | orchestrator | 2026-03-09 00:47:44 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:44.405936 | orchestrator | 2026-03-09 00:47:44 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:44.405993 | orchestrator | 2026-03-09 00:47:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:47.459073 | orchestrator | 2026-03-09 00:47:47 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:47.460259 | orchestrator | 2026-03-09 00:47:47 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:47.465070 | orchestrator | 2026-03-09 00:47:47 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:47.465154 | orchestrator | 2026-03-09 00:47:47 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:50.501303 | orchestrator | 2026-03-09 00:47:50 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:50.502247 | orchestrator | 2026-03-09 00:47:50 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:50.503109 | orchestrator | 2026-03-09 00:47:50 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:50.503149 | orchestrator | 2026-03-09 00:47:50 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:53.550505 | orchestrator | 2026-03-09 00:47:53 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state 
STARTED 2026-03-09 00:47:53.553847 | orchestrator | 2026-03-09 00:47:53 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:53.553916 | orchestrator | 2026-03-09 00:47:53 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:53.553927 | orchestrator | 2026-03-09 00:47:53 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:56.602679 | orchestrator | 2026-03-09 00:47:56 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:56.604193 | orchestrator | 2026-03-09 00:47:56 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:56.606598 | orchestrator | 2026-03-09 00:47:56 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:56.606650 | orchestrator | 2026-03-09 00:47:56 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:47:59.652767 | orchestrator | 2026-03-09 00:47:59 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:47:59.654546 | orchestrator | 2026-03-09 00:47:59 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:47:59.657221 | orchestrator | 2026-03-09 00:47:59 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:47:59.657292 | orchestrator | 2026-03-09 00:47:59 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:02.706375 | orchestrator | 2026-03-09 00:48:02 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:02.707271 | orchestrator | 2026-03-09 00:48:02 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:48:02.709449 | orchestrator | 2026-03-09 00:48:02 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:02.709497 | orchestrator | 2026-03-09 00:48:02 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:05.752129 | orchestrator | 
2026-03-09 00:48:05 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:05.755050 | orchestrator | 2026-03-09 00:48:05 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:48:05.756662 | orchestrator | 2026-03-09 00:48:05 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:05.756704 | orchestrator | 2026-03-09 00:48:05 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:08.814093 | orchestrator | 2026-03-09 00:48:08 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:08.814785 | orchestrator | 2026-03-09 00:48:08 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:48:08.816038 | orchestrator | 2026-03-09 00:48:08 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:08.816386 | orchestrator | 2026-03-09 00:48:08 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:11.858387 | orchestrator | 2026-03-09 00:48:11 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:11.860038 | orchestrator | 2026-03-09 00:48:11 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:48:11.862548 | orchestrator | 2026-03-09 00:48:11 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:11.862598 | orchestrator | 2026-03-09 00:48:11 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:14.902119 | orchestrator | 2026-03-09 00:48:14 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:14.902395 | orchestrator | 2026-03-09 00:48:14 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state STARTED 2026-03-09 00:48:14.904804 | orchestrator | 2026-03-09 00:48:14 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:14.904865 | orchestrator | 2026-03-09 00:48:14 | INFO  | 
Wait 1 second(s) until the next check 2026-03-09 00:48:17.944320 | orchestrator | 2026-03-09 00:48:17 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state STARTED 2026-03-09 00:48:17.946111 | orchestrator | 2026-03-09 00:48:17 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:17.954317 | orchestrator | 2026-03-09 00:48:17 | INFO  | Task 7a55c0fa-c4b4-4bdb-9023-2c6f497adf6a is in state SUCCESS 2026-03-09 00:48:17.957052 | orchestrator | 2026-03-09 00:48:17.957097 | orchestrator | 2026-03-09 00:48:17.957106 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-03-09 00:48:17.957114 | orchestrator | 2026-03-09 00:48:17.957121 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-03-09 00:48:17.957128 | orchestrator | Monday 09 March 2026 00:46:16 +0000 (0:00:00.197) 0:00:00.197 ********** 2026-03-09 00:48:17.957135 | orchestrator | ok: [testbed-manager] 2026-03-09 00:48:17.957143 | orchestrator | 2026-03-09 00:48:17.957149 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-03-09 00:48:17.957156 | orchestrator | Monday 09 March 2026 00:46:17 +0000 (0:00:00.994) 0:00:01.191 ********** 2026-03-09 00:48:17.957163 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-03-09 00:48:17.957170 | orchestrator | 2026-03-09 00:48:17.957176 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-03-09 00:48:17.957183 | orchestrator | Monday 09 March 2026 00:46:18 +0000 (0:00:00.704) 0:00:01.896 ********** 2026-03-09 00:48:17.957200 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:17.957211 | orchestrator | 2026-03-09 00:48:17.957221 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-03-09 00:48:17.957231 | orchestrator | Monday 09 March 2026 00:46:19 +0000 
(0:00:01.331) 0:00:03.227 ********** 2026-03-09 00:48:17.957241 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-03-09 00:48:17.957251 | orchestrator | ok: [testbed-manager] 2026-03-09 00:48:17.957261 | orchestrator | 2026-03-09 00:48:17.957270 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-03-09 00:48:17.957281 | orchestrator | Monday 09 March 2026 00:47:27 +0000 (0:01:07.744) 0:01:10.972 ********** 2026-03-09 00:48:17.957291 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:17.957302 | orchestrator | 2026-03-09 00:48:17.957312 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:48:17.957323 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:48:17.957335 | orchestrator | 2026-03-09 00:48:17.957346 | orchestrator | 2026-03-09 00:48:17.957356 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:48:17.957388 | orchestrator | Monday 09 March 2026 00:47:38 +0000 (0:00:11.047) 0:01:22.020 ********** 2026-03-09 00:48:17.957398 | orchestrator | =============================================================================== 2026-03-09 00:48:17.957408 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 67.74s 2026-03-09 00:48:17.957436 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 11.05s 2026-03-09 00:48:17.957448 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.33s 2026-03-09 00:48:17.957459 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.00s 2026-03-09 00:48:17.957470 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.70s 2026-03-09 00:48:17.957481 | 
orchestrator | 2026-03-09 00:48:17.957491 | orchestrator | 2026-03-09 00:48:17.957501 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-09 00:48:17.957511 | orchestrator | 2026-03-09 00:48:17.957522 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-09 00:48:17.957532 | orchestrator | Monday 09 March 2026 00:45:50 +0000 (0:00:00.251) 0:00:00.251 ********** 2026-03-09 00:48:17.957538 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:48:17.957546 | orchestrator | 2026-03-09 00:48:17.957553 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-09 00:48:17.957559 | orchestrator | Monday 09 March 2026 00:45:51 +0000 (0:00:01.411) 0:00:01.663 ********** 2026-03-09 00:48:17.957565 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-09 00:48:17.957572 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-09 00:48:17.957578 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-09 00:48:17.957584 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-09 00:48:17.957590 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-09 00:48:17.957597 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-09 00:48:17.957604 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-09 00:48:17.957610 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-09 00:48:17.957616 | orchestrator | changed: [testbed-manager] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-09 00:48:17.957622 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-09 00:48:17.957630 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-09 00:48:17.957637 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-09 00:48:17.957644 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-09 00:48:17.957651 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-09 00:48:17.957658 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-09 00:48:17.957666 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-09 00:48:17.957706 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-09 00:48:17.957714 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-09 00:48:17.957722 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-09 00:48:17.957730 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-09 00:48:17.957737 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-09 00:48:17.957752 | orchestrator | 2026-03-09 00:48:17.957759 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-09 00:48:17.957766 | orchestrator | Monday 09 March 2026 00:45:56 +0000 (0:00:04.328) 0:00:05.992 ********** 2026-03-09 00:48:17.957778 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:48:17.957786 | orchestrator | 2026-03-09 00:48:17.957793 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-09 00:48:17.957801 | orchestrator | Monday 09 March 2026 00:45:57 +0000 (0:00:01.412) 0:00:07.404 ********** 2026-03-09 00:48:17.957811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.957823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.957830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2026-03-09 00:48:17.957837 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.957845 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.957853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.957876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.957892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.957900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.957907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.957915 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.957922 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.957944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.957961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.957982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.957990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.957997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958004 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958010 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958072 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958080 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 
00:48:17.958086 | orchestrator | 2026-03-09 00:48:17.958093 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-09 00:48:17.958105 | orchestrator | Monday 09 March 2026 00:46:03 +0000 (0:00:06.176) 0:00:13.580 ********** 2026-03-09 00:48:17.958135 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958144 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958150 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958165 | orchestrator | skipping: 
[testbed-manager] 2026-03-09 00:48:17.958172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958192 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:48:17.958199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958232 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:48:17.958241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958262 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:48:17.958268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958292 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:48:17.958299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958328 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:48:17.958334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958348 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958354 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:48:17.958361 | orchestrator | 2026-03-09 00:48:17.958367 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-09 00:48:17.958377 | orchestrator | Monday 09 March 2026 00:46:05 +0000 (0:00:01.672) 0:00:15.253 ********** 2026-03-09 00:48:17.958384 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958391 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958401 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958408 | orchestrator | skipping: [testbed-manager] 2026-03-09 00:48:17.958445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958473 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:48:17.958484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 
00:48:17.958497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958524 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:48:17.958531 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:48:17.958537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958561 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:48:17.958567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 
00:48:17.958574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.958591 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:48:17.958597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-09 00:48:17.958606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:48:17.958613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 00:48:17.958620 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:48:17.958626 | orchestrator |
2026-03-09 00:48:17.958632 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-03-09 00:48:17.958639 | orchestrator | Monday 09 March 2026 00:46:08 +0000 (0:00:03.337) 0:00:18.590 **********
2026-03-09 00:48:17.958649 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:48:17.958655 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:48:17.958662 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:48:17.958668 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:48:17.958674 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:48:17.958680 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:48:17.958686 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:48:17.958693 | orchestrator |
2026-03-09 00:48:17.958699 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-03-09 00:48:17.958705 | orchestrator | Monday 09 March 2026 00:46:10 +0000 (0:00:01.949) 0:00:20.540 **********
2026-03-09 00:48:17.958712 | orchestrator | skipping: [testbed-manager]
2026-03-09 00:48:17.958718 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:48:17.958724 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:48:17.958730 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:48:17.958736 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:48:17.958742 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:48:17.958748 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:48:17.958755 | orchestrator |
2026-03-09 00:48:17.958761 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-03-09 00:48:17.958767 | orchestrator | Monday 09 March 2026 00:46:12 +0000 (0:00:01.545) 0:00:22.085 **********
2026-03-09 00:48:17.958773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:48:17.958780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-09 00:48:17.958792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value':
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.958802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958809 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.958819 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.958826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958856 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958874 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.958881 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.958892 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958980 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958987 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.958993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.959011 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.959018 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.959028 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.959040 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.959047 | orchestrator | 2026-03-09 00:48:17.959053 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-09 00:48:17.959060 | orchestrator | Monday 09 March 2026 00:46:20 +0000 (0:00:07.971) 0:00:30.057 ********** 2026-03-09 00:48:17.959066 | orchestrator | [WARNING]: Skipped 2026-03-09 00:48:17.959073 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-09 00:48:17.959079 | orchestrator | to this access issue: 2026-03-09 00:48:17.959086 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-09 00:48:17.959092 | orchestrator | directory 2026-03-09 00:48:17.959098 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 00:48:17.959105 | orchestrator | 2026-03-09 00:48:17.959111 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-09 00:48:17.959118 | orchestrator | Monday 09 March 2026 00:46:21 +0000 (0:00:01.685) 0:00:31.742 ********** 2026-03-09 00:48:17.959124 | orchestrator | [WARNING]: Skipped 2026-03-09 00:48:17.959130 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-09 00:48:17.959136 | orchestrator | to this access issue: 2026-03-09 00:48:17.959143 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-09 00:48:17.959149 | orchestrator | directory 2026-03-09 00:48:17.959155 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 00:48:17.959162 | orchestrator | 
2026-03-09 00:48:17.959168 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-09 00:48:17.959175 | orchestrator | Monday 09 March 2026 00:46:22 +0000 (0:00:00.841) 0:00:32.583 ********** 2026-03-09 00:48:17.959181 | orchestrator | [WARNING]: Skipped 2026-03-09 00:48:17.959187 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-09 00:48:17.959193 | orchestrator | to this access issue: 2026-03-09 00:48:17.959200 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-09 00:48:17.959206 | orchestrator | directory 2026-03-09 00:48:17.959212 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 00:48:17.959218 | orchestrator | 2026-03-09 00:48:17.959225 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-09 00:48:17.959231 | orchestrator | Monday 09 March 2026 00:46:23 +0000 (0:00:01.126) 0:00:33.710 ********** 2026-03-09 00:48:17.959237 | orchestrator | [WARNING]: Skipped 2026-03-09 00:48:17.959243 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-09 00:48:17.959250 | orchestrator | to this access issue: 2026-03-09 00:48:17.959256 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-09 00:48:17.959263 | orchestrator | directory 2026-03-09 00:48:17.959269 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 00:48:17.959275 | orchestrator | 2026-03-09 00:48:17.959281 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-09 00:48:17.959288 | orchestrator | Monday 09 March 2026 00:46:26 +0000 (0:00:02.297) 0:00:36.008 ********** 2026-03-09 00:48:17.959294 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:17.959300 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:48:17.959306 | 
orchestrator | changed: [testbed-node-3] 2026-03-09 00:48:17.959313 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:48:17.959323 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:48:17.959329 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:48:17.959335 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:48:17.959341 | orchestrator | 2026-03-09 00:48:17.959348 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-09 00:48:17.959354 | orchestrator | Monday 09 March 2026 00:46:30 +0000 (0:00:04.187) 0:00:40.195 ********** 2026-03-09 00:48:17.959361 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:48:17.959367 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:48:17.959373 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:48:17.959383 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:48:17.959389 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:48:17.959396 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:48:17.959402 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-09 00:48:17.959408 | orchestrator | 2026-03-09 00:48:17.959414 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-09 00:48:17.959507 | orchestrator | Monday 09 March 2026 00:46:32 +0000 (0:00:02.400) 0:00:42.596 ********** 2026-03-09 00:48:17.959515 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:48:17.959521 | orchestrator 
| changed: [testbed-node-1] 2026-03-09 00:48:17.959531 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:48:17.959537 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:17.959544 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:48:17.959550 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:48:17.959556 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:48:17.959562 | orchestrator | 2026-03-09 00:48:17.959569 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-09 00:48:17.959575 | orchestrator | Monday 09 March 2026 00:46:36 +0000 (0:00:03.640) 0:00:46.236 ********** 2026-03-09 00:48:17.959582 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.959589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.959595 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.959607 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.959616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.959631 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.959643 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.959651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.959658 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-03-09 00:48:17.959665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.959677 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.959688 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.959696 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-09 00:48:17.959708 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.959719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.959727 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.959734 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.959742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:48:17.959754 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.959761 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.959769 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.959776 | orchestrator | 2026-03-09 00:48:17.959784 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-09 00:48:17.959792 | orchestrator | Monday 09 March 2026 00:46:38 +0000 (0:00:02.207) 0:00:48.444 ********** 2026-03-09 00:48:17.959799 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:48:17.959806 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:48:17.959814 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:48:17.959827 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:48:17.959835 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:48:17.959842 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:48:17.959849 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-09 00:48:17.959856 | orchestrator | 2026-03-09 00:48:17.959863 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-09 00:48:17.959871 | orchestrator | Monday 09 March 2026 00:46:41 +0000 (0:00:02.977) 0:00:51.421 ********** 2026-03-09 00:48:17.959878 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:48:17.959888 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:48:17.959896 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:48:17.959903 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:48:17.959911 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:48:17.959918 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:48:17.959925 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-09 00:48:17.959932 | orchestrator | 2026-03-09 00:48:17.959938 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-09 00:48:17.959944 | orchestrator | Monday 09 March 2026 00:46:44 +0000 (0:00:03.026) 0:00:54.448 ********** 2026-03-09 00:48:17.959951 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.959961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 
00:48:17.959968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.959975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.959982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.959992 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.960002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.960009 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.960020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.960027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.960033 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-09 00:48:17.960040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-09 00:48:17.960051 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.960061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.960067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.960080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-09 00:48:17.960087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.960094 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.960101 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.960107 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.960114 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:48:17.960120 | orchestrator | 2026-03-09 00:48:17.960132 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-09 00:48:17.960142 | orchestrator | Monday 09 March 2026 00:46:47 +0000 (0:00:03.280) 0:00:57.729 ********** 2026-03-09 00:48:17.960153 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:17.960163 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:48:17.960173 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:48:17.960183 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:48:17.960192 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:48:17.960202 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:48:17.960212 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:48:17.960222 | orchestrator | 2026-03-09 00:48:17.960231 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-09 00:48:17.960240 | orchestrator | Monday 09 March 2026 00:46:49 +0000 (0:00:01.756) 0:00:59.485 ********** 2026-03-09 00:48:17.960257 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:17.960266 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:48:17.960275 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:48:17.960285 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:48:17.960299 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:48:17.960309 | orchestrator | changed: [testbed-node-4] 2026-03-09 
00:48:17.960317 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:48:17.960326 | orchestrator | 2026-03-09 00:48:17.960336 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:48:17.960345 | orchestrator | Monday 09 March 2026 00:46:50 +0000 (0:00:01.365) 0:01:00.851 ********** 2026-03-09 00:48:17.960355 | orchestrator | 2026-03-09 00:48:17.960366 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:48:17.960376 | orchestrator | Monday 09 March 2026 00:46:50 +0000 (0:00:00.070) 0:01:00.922 ********** 2026-03-09 00:48:17.960386 | orchestrator | 2026-03-09 00:48:17.960397 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:48:17.960407 | orchestrator | Monday 09 March 2026 00:46:51 +0000 (0:00:00.077) 0:01:01.000 ********** 2026-03-09 00:48:17.960466 | orchestrator | 2026-03-09 00:48:17.960479 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:48:17.960490 | orchestrator | Monday 09 March 2026 00:46:51 +0000 (0:00:00.367) 0:01:01.367 ********** 2026-03-09 00:48:17.960499 | orchestrator | 2026-03-09 00:48:17.960510 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:48:17.960521 | orchestrator | Monday 09 March 2026 00:46:51 +0000 (0:00:00.087) 0:01:01.454 ********** 2026-03-09 00:48:17.960531 | orchestrator | 2026-03-09 00:48:17.960541 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:48:17.960552 | orchestrator | Monday 09 March 2026 00:46:51 +0000 (0:00:00.084) 0:01:01.539 ********** 2026-03-09 00:48:17.960563 | orchestrator | 2026-03-09 00:48:17.960573 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-09 00:48:17.960584 | orchestrator | Monday 09 
March 2026 00:46:51 +0000 (0:00:00.099) 0:01:01.638 ********** 2026-03-09 00:48:17.960594 | orchestrator | 2026-03-09 00:48:17.960604 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-09 00:48:17.960614 | orchestrator | Monday 09 March 2026 00:46:51 +0000 (0:00:00.203) 0:01:01.842 ********** 2026-03-09 00:48:17.960624 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:48:17.960635 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:17.960645 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:48:17.960655 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:48:17.960665 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:48:17.960676 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:48:17.960686 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:48:17.960696 | orchestrator | 2026-03-09 00:48:17.960706 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-09 00:48:17.960717 | orchestrator | Monday 09 March 2026 00:47:30 +0000 (0:00:39.068) 0:01:40.911 ********** 2026-03-09 00:48:17.960727 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:48:17.960738 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:48:17.960748 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:48:17.960758 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:17.960768 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:48:17.960779 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:48:17.960785 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:48:17.960791 | orchestrator | 2026-03-09 00:48:17.960798 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-09 00:48:17.960804 | orchestrator | Monday 09 March 2026 00:48:03 +0000 (0:00:32.719) 0:02:13.630 ********** 2026-03-09 00:48:17.960810 | orchestrator | ok: [testbed-manager] 2026-03-09 00:48:17.960817 | 
orchestrator | ok: [testbed-node-0] 2026-03-09 00:48:17.960831 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:48:17.960837 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:48:17.960843 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:48:17.960849 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:48:17.960855 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:48:17.960861 | orchestrator | 2026-03-09 00:48:17.960868 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-09 00:48:17.960874 | orchestrator | Monday 09 March 2026 00:48:05 +0000 (0:00:02.314) 0:02:15.945 ********** 2026-03-09 00:48:17.960880 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:48:17.960886 | orchestrator | changed: [testbed-manager] 2026-03-09 00:48:17.960893 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:48:17.960903 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:48:17.960913 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:48:17.960919 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:48:17.960925 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:48:17.960931 | orchestrator | 2026-03-09 00:48:17.960937 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:48:17.960945 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 00:48:17.960952 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 00:48:17.960966 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 00:48:17.960973 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 00:48:17.960979 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 
00:48:17.960986 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 00:48:17.960992 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-09 00:48:17.960998 | orchestrator | 2026-03-09 00:48:17.961004 | orchestrator | 2026-03-09 00:48:17.961011 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:48:17.961017 | orchestrator | Monday 09 March 2026 00:48:14 +0000 (0:00:08.847) 0:02:24.792 ********** 2026-03-09 00:48:17.961023 | orchestrator | =============================================================================== 2026-03-09 00:48:17.961029 | orchestrator | common : Restart fluentd container ------------------------------------- 39.07s 2026-03-09 00:48:17.961035 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.72s 2026-03-09 00:48:17.961041 | orchestrator | common : Restart cron container ----------------------------------------- 8.85s 2026-03-09 00:48:17.961048 | orchestrator | common : Copying over config.json files for services -------------------- 7.97s 2026-03-09 00:48:17.961054 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.18s 2026-03-09 00:48:17.961060 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.33s 2026-03-09 00:48:17.961066 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.19s 2026-03-09 00:48:17.961072 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.64s 2026-03-09 00:48:17.961079 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.34s 2026-03-09 00:48:17.961085 | orchestrator | common : Check common containers ---------------------------------------- 3.28s 2026-03-09 00:48:17.961091 | orchestrator | 
common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.03s 2026-03-09 00:48:17.961101 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.98s 2026-03-09 00:48:17.961108 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.40s 2026-03-09 00:48:17.961114 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.31s 2026-03-09 00:48:17.961120 | orchestrator | common : Find custom fluentd output config files ------------------------ 2.30s 2026-03-09 00:48:17.961126 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.21s 2026-03-09 00:48:17.961132 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.95s 2026-03-09 00:48:17.961144 | orchestrator | common : Creating log volume -------------------------------------------- 1.76s 2026-03-09 00:48:17.961151 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.69s 2026-03-09 00:48:17.961157 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.67s 2026-03-09 00:48:17.961165 | orchestrator | 2026-03-09 00:48:17 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:48:17.961374 | orchestrator | 2026-03-09 00:48:17 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:17.963704 | orchestrator | 2026-03-09 00:48:17 | INFO  | Task 1ec8c42f-28b7-4116-b97b-2d71238cb7a3 is in state STARTED 2026-03-09 00:48:17.965305 | orchestrator | 2026-03-09 00:48:17 | INFO  | Task 154f8af9-c083-40b4-b0bf-6a508539f7af is in state STARTED 2026-03-09 00:48:17.965367 | orchestrator | 2026-03-09 00:48:17 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:21.003514 | orchestrator | 2026-03-09 00:48:21 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state STARTED 
2026-03-09 00:48:21.005177 | orchestrator | 2026-03-09 00:48:21 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:21.005813 | orchestrator | 2026-03-09 00:48:21 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:48:21.008164 | orchestrator | 2026-03-09 00:48:21 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:21.009027 | orchestrator | 2026-03-09 00:48:21 | INFO  | Task 1ec8c42f-28b7-4116-b97b-2d71238cb7a3 is in state STARTED 2026-03-09 00:48:21.009589 | orchestrator | 2026-03-09 00:48:21 | INFO  | Task 154f8af9-c083-40b4-b0bf-6a508539f7af is in state STARTED 2026-03-09 00:48:21.009625 | orchestrator | 2026-03-09 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:24.047013 | orchestrator | 2026-03-09 00:48:24 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state STARTED 2026-03-09 00:48:24.047098 | orchestrator | 2026-03-09 00:48:24 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:24.049236 | orchestrator | 2026-03-09 00:48:24 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:48:24.050851 | orchestrator | 2026-03-09 00:48:24 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:24.051222 | orchestrator | 2026-03-09 00:48:24 | INFO  | Task 1ec8c42f-28b7-4116-b97b-2d71238cb7a3 is in state STARTED 2026-03-09 00:48:24.052083 | orchestrator | 2026-03-09 00:48:24 | INFO  | Task 154f8af9-c083-40b4-b0bf-6a508539f7af is in state STARTED 2026-03-09 00:48:24.052119 | orchestrator | 2026-03-09 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:27.105903 | orchestrator | 2026-03-09 00:48:27 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state STARTED 2026-03-09 00:48:27.106214 | orchestrator | 2026-03-09 00:48:27 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 
2026-03-09 00:48:27.106750 | orchestrator | 2026-03-09 00:48:27 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:48:27.107579 | orchestrator | 2026-03-09 00:48:27 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:27.107962 | orchestrator | 2026-03-09 00:48:27 | INFO  | Task 1ec8c42f-28b7-4116-b97b-2d71238cb7a3 is in state STARTED 2026-03-09 00:48:27.108695 | orchestrator | 2026-03-09 00:48:27 | INFO  | Task 154f8af9-c083-40b4-b0bf-6a508539f7af is in state STARTED 2026-03-09 00:48:27.108730 | orchestrator | 2026-03-09 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:30.151821 | orchestrator | 2026-03-09 00:48:30 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state STARTED 2026-03-09 00:48:30.152514 | orchestrator | 2026-03-09 00:48:30 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:30.153442 | orchestrator | 2026-03-09 00:48:30 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:48:30.154480 | orchestrator | 2026-03-09 00:48:30 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:30.155474 | orchestrator | 2026-03-09 00:48:30 | INFO  | Task 1ec8c42f-28b7-4116-b97b-2d71238cb7a3 is in state STARTED 2026-03-09 00:48:30.156541 | orchestrator | 2026-03-09 00:48:30 | INFO  | Task 154f8af9-c083-40b4-b0bf-6a508539f7af is in state STARTED 2026-03-09 00:48:30.156579 | orchestrator | 2026-03-09 00:48:30 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:33.204227 | orchestrator | 2026-03-09 00:48:33 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state STARTED 2026-03-09 00:48:33.205643 | orchestrator | 2026-03-09 00:48:33 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:33.206546 | orchestrator | 2026-03-09 00:48:33 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 
2026-03-09 00:48:33.208241 | orchestrator | 2026-03-09 00:48:33 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:33.210793 | orchestrator | 2026-03-09 00:48:33 | INFO  | Task 1ec8c42f-28b7-4116-b97b-2d71238cb7a3 is in state STARTED 2026-03-09 00:48:33.213009 | orchestrator | 2026-03-09 00:48:33 | INFO  | Task 154f8af9-c083-40b4-b0bf-6a508539f7af is in state STARTED 2026-03-09 00:48:33.213084 | orchestrator | 2026-03-09 00:48:33 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:36.263671 | orchestrator | 2026-03-09 00:48:36 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state STARTED 2026-03-09 00:48:36.265877 | orchestrator | 2026-03-09 00:48:36 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:36.266368 | orchestrator | 2026-03-09 00:48:36 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:48:36.268781 | orchestrator | 2026-03-09 00:48:36 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:36.269822 | orchestrator | 2026-03-09 00:48:36 | INFO  | Task 1ec8c42f-28b7-4116-b97b-2d71238cb7a3 is in state STARTED 2026-03-09 00:48:36.270270 | orchestrator | 2026-03-09 00:48:36 | INFO  | Task 154f8af9-c083-40b4-b0bf-6a508539f7af is in state STARTED 2026-03-09 00:48:36.270298 | orchestrator | 2026-03-09 00:48:36 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:39.315176 | orchestrator | 2026-03-09 00:48:39 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state STARTED 2026-03-09 00:48:39.315722 | orchestrator | 2026-03-09 00:48:39 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:39.316573 | orchestrator | 2026-03-09 00:48:39 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:48:39.318754 | orchestrator | 2026-03-09 00:48:39 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 
2026-03-09 00:48:39.319274 | orchestrator | 2026-03-09 00:48:39 | INFO  | Task 1ec8c42f-28b7-4116-b97b-2d71238cb7a3 is in state SUCCESS 2026-03-09 00:48:39.321858 | orchestrator | 2026-03-09 00:48:39 | INFO  | Task 154f8af9-c083-40b4-b0bf-6a508539f7af is in state STARTED 2026-03-09 00:48:39.322606 | orchestrator | 2026-03-09 00:48:39 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED 2026-03-09 00:48:39.322652 | orchestrator | 2026-03-09 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:42.397981 | orchestrator | 2026-03-09 00:48:42 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state STARTED 2026-03-09 00:48:42.398720 | orchestrator | 2026-03-09 00:48:42 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:42.399357 | orchestrator | 2026-03-09 00:48:42 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:48:42.401230 | orchestrator | 2026-03-09 00:48:42 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:42.402218 | orchestrator | 2026-03-09 00:48:42 | INFO  | Task 154f8af9-c083-40b4-b0bf-6a508539f7af is in state STARTED 2026-03-09 00:48:42.403611 | orchestrator | 2026-03-09 00:48:42 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED 2026-03-09 00:48:42.403652 | orchestrator | 2026-03-09 00:48:42 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:45.474477 | orchestrator | 2026-03-09 00:48:45 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state STARTED 2026-03-09 00:48:45.474757 | orchestrator | 2026-03-09 00:48:45 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:45.476027 | orchestrator | 2026-03-09 00:48:45 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:48:45.477144 | orchestrator | 2026-03-09 00:48:45 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 
2026-03-09 00:48:45.477632 | orchestrator | 2026-03-09 00:48:45 | INFO  | Task 154f8af9-c083-40b4-b0bf-6a508539f7af is in state STARTED 2026-03-09 00:48:45.478943 | orchestrator | 2026-03-09 00:48:45 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED 2026-03-09 00:48:45.478981 | orchestrator | 2026-03-09 00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:48.504042 | orchestrator | 2026-03-09 00:48:48 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state STARTED 2026-03-09 00:48:48.505123 | orchestrator | 2026-03-09 00:48:48 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:48.506729 | orchestrator | 2026-03-09 00:48:48 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:48:48.507263 | orchestrator | 2026-03-09 00:48:48 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:48.510871 | orchestrator | 2026-03-09 00:48:48 | INFO  | Task 154f8af9-c083-40b4-b0bf-6a508539f7af is in state SUCCESS 2026-03-09 00:48:48.511841 | orchestrator | 2026-03-09 00:48:48.511886 | orchestrator | 2026-03-09 00:48:48.511904 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:48:48.511919 | orchestrator | 2026-03-09 00:48:48.511933 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:48:48.511946 | orchestrator | Monday 09 March 2026 00:48:21 +0000 (0:00:00.410) 0:00:00.410 ********** 2026-03-09 00:48:48.511960 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:48:48.511975 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:48:48.512023 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:48:48.512041 | orchestrator | 2026-03-09 00:48:48.512057 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:48:48.512066 | orchestrator | Monday 09 March 2026 
00:48:22 +0000 (0:00:00.507) 0:00:00.917 ********** 2026-03-09 00:48:48.512075 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-09 00:48:48.512086 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-09 00:48:48.512101 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-09 00:48:48.512115 | orchestrator | 2026-03-09 00:48:48.512129 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-09 00:48:48.512142 | orchestrator | 2026-03-09 00:48:48.512156 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-09 00:48:48.512170 | orchestrator | Monday 09 March 2026 00:48:23 +0000 (0:00:00.573) 0:00:01.490 ********** 2026-03-09 00:48:48.512184 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:48:48.512199 | orchestrator | 2026-03-09 00:48:48.512214 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-09 00:48:48.512229 | orchestrator | Monday 09 March 2026 00:48:24 +0000 (0:00:00.993) 0:00:02.483 ********** 2026-03-09 00:48:48.512245 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-09 00:48:48.512259 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-09 00:48:48.512274 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-09 00:48:48.512285 | orchestrator | 2026-03-09 00:48:48.512298 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-09 00:48:48.512316 | orchestrator | Monday 09 March 2026 00:48:24 +0000 (0:00:00.876) 0:00:03.360 ********** 2026-03-09 00:48:48.512338 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-09 00:48:48.512352 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-09 00:48:48.512366 | 
orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-09 00:48:48.512380 | orchestrator | 2026-03-09 00:48:48.512496 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-09 00:48:48.512510 | orchestrator | Monday 09 March 2026 00:48:26 +0000 (0:00:01.980) 0:00:05.340 ********** 2026-03-09 00:48:48.512520 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:48:48.512531 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:48:48.512539 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:48:48.512548 | orchestrator | 2026-03-09 00:48:48.512557 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-09 00:48:48.512566 | orchestrator | Monday 09 March 2026 00:48:28 +0000 (0:00:01.719) 0:00:07.060 ********** 2026-03-09 00:48:48.512574 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:48:48.512583 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:48:48.512592 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:48:48.512601 | orchestrator | 2026-03-09 00:48:48.512610 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:48:48.512620 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:48:48.512630 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:48:48.512640 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:48:48.512648 | orchestrator | 2026-03-09 00:48:48.512657 | orchestrator | 2026-03-09 00:48:48.512666 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:48:48.512675 | orchestrator | Monday 09 March 2026 00:48:36 +0000 (0:00:07.633) 0:00:14.693 ********** 2026-03-09 00:48:48.512684 | orchestrator | 
=============================================================================== 2026-03-09 00:48:48.512707 | orchestrator | memcached : Restart memcached container --------------------------------- 7.63s 2026-03-09 00:48:48.512728 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.98s 2026-03-09 00:48:48.512745 | orchestrator | memcached : Check memcached container ----------------------------------- 1.72s 2026-03-09 00:48:48.512759 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.99s 2026-03-09 00:48:48.512772 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.88s 2026-03-09 00:48:48.512786 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-03-09 00:48:48.512800 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.51s 2026-03-09 00:48:48.512816 | orchestrator | 2026-03-09 00:48:48.512832 | orchestrator | 2026-03-09 00:48:48.512850 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:48:48.512866 | orchestrator | 2026-03-09 00:48:48.512882 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:48:48.512892 | orchestrator | Monday 09 March 2026 00:48:22 +0000 (0:00:00.405) 0:00:00.405 ********** 2026-03-09 00:48:48.512900 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:48:48.512909 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:48:48.512918 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:48:48.512927 | orchestrator | 2026-03-09 00:48:48.512936 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:48:48.512960 | orchestrator | Monday 09 March 2026 00:48:23 +0000 (0:00:00.491) 0:00:00.897 ********** 2026-03-09 00:48:48.512970 | orchestrator | ok: [testbed-node-0] => 
(item=enable_redis_True) 2026-03-09 00:48:48.512979 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-09 00:48:48.512987 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-09 00:48:48.512996 | orchestrator | 2026-03-09 00:48:48.513004 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-09 00:48:48.513013 | orchestrator | 2026-03-09 00:48:48.513022 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-09 00:48:48.513031 | orchestrator | Monday 09 March 2026 00:48:24 +0000 (0:00:00.813) 0:00:01.710 ********** 2026-03-09 00:48:48.513039 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:48:48.513048 | orchestrator | 2026-03-09 00:48:48.513057 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-09 00:48:48.513066 | orchestrator | Monday 09 March 2026 00:48:24 +0000 (0:00:00.649) 0:00:02.360 ********** 2026-03-09 00:48:48.513078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513167 | orchestrator | 2026-03-09 00:48:48.513175 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-09 00:48:48.513185 | orchestrator | Monday 09 March 2026 00:48:26 +0000 (0:00:01.425) 0:00:03.786 ********** 2026-03-09 00:48:48.513194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513209 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513316 | orchestrator | 2026-03-09 00:48:48.513330 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-09 00:48:48.513344 | orchestrator | Monday 09 March 2026 00:48:29 +0000 (0:00:02.860) 0:00:06.646 ********** 2026-03-09 00:48:48.513357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513493 | orchestrator | 2026-03-09 00:48:48.513518 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-09 00:48:48.513533 | orchestrator | Monday 09 March 2026 00:48:31 +0000 (0:00:02.700) 0:00:09.346 ********** 2026-03-09 00:48:48.513549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-09 00:48:48.513655 | orchestrator | 2026-03-09 00:48:48.513669 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-09 00:48:48.513681 | orchestrator 
| Monday 09 March 2026 00:48:33 +0000 (0:00:01.739) 0:00:11.086 ********** 2026-03-09 00:48:48.513696 | orchestrator | 2026-03-09 00:48:48.513710 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-09 00:48:48.513730 | orchestrator | Monday 09 March 2026 00:48:33 +0000 (0:00:00.075) 0:00:11.161 ********** 2026-03-09 00:48:48.513743 | orchestrator | 2026-03-09 00:48:48.513756 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-09 00:48:48.513771 | orchestrator | Monday 09 March 2026 00:48:33 +0000 (0:00:00.065) 0:00:11.227 ********** 2026-03-09 00:48:48.513792 | orchestrator | 2026-03-09 00:48:48.513813 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-09 00:48:48.513834 | orchestrator | Monday 09 March 2026 00:48:33 +0000 (0:00:00.069) 0:00:11.296 ********** 2026-03-09 00:48:48.513853 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:48:48.513874 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:48:48.513892 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:48:48.513911 | orchestrator | 2026-03-09 00:48:48.513931 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-09 00:48:48.513950 | orchestrator | Monday 09 March 2026 00:48:37 +0000 (0:00:04.128) 0:00:15.424 ********** 2026-03-09 00:48:48.513995 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:48:48.514086 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:48:48.514112 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:48:48.514131 | orchestrator | 2026-03-09 00:48:48.514152 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:48:48.514172 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:48:48.514193 | orchestrator | testbed-node-1 : 
ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:48:48.514213 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:48:48.514234 | orchestrator | 2026-03-09 00:48:48.514253 | orchestrator | 2026-03-09 00:48:48.514279 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:48:48.514304 | orchestrator | Monday 09 March 2026 00:48:45 +0000 (0:00:07.990) 0:00:23.414 ********** 2026-03-09 00:48:48.514330 | orchestrator | =============================================================================== 2026-03-09 00:48:48.514355 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.99s 2026-03-09 00:48:48.514373 | orchestrator | redis : Restart redis container ----------------------------------------- 4.13s 2026-03-09 00:48:48.514417 | orchestrator | redis : Copying over default config.json files -------------------------- 2.86s 2026-03-09 00:48:48.514441 | orchestrator | redis : Copying over redis config files --------------------------------- 2.70s 2026-03-09 00:48:48.514463 | orchestrator | redis : Check redis containers ------------------------------------------ 1.74s 2026-03-09 00:48:48.514488 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.43s 2026-03-09 00:48:48.514512 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s 2026-03-09 00:48:48.514538 | orchestrator | redis : include_tasks --------------------------------------------------- 0.65s 2026-03-09 00:48:48.514563 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s 2026-03-09 00:48:48.514587 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s 2026-03-09 00:48:48.514613 | orchestrator | 2026-03-09 00:48:48 | INFO  | Task 
1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED 2026-03-09 00:48:48.514641 | orchestrator | 2026-03-09 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:48:51.547808 | orchestrator | 2026-03-09 00:48:51 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state STARTED 2026-03-09 00:48:51.549692 | orchestrator | 2026-03-09 00:48:51 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:48:51.551799 | orchestrator | 2026-03-09 00:48:51 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:48:51.553287 | orchestrator | 2026-03-09 00:48:51 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:48:51.555702 | orchestrator | 2026-03-09 00:48:51 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED 2026-03-09 00:48:51.555747 | orchestrator | 2026-03-09 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:31.855182 | orchestrator | 2026-03-09 00:49:31.855253 | orchestrator | 2026-03-09 00:49:31.855260 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:49:31.855266 | orchestrator | 2026-03-09 00:49:31.855272 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:49:31.855278 | orchestrator | Monday 09 March 2026 00:48:22 +0000 (0:00:00.536) 0:00:00.536 ********** 2026-03-09 00:49:31.855284 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:49:31.855290 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:49:31.855295 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:49:31.855301 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:49:31.855306 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:49:31.855434 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:49:31.855442 | orchestrator | 2026-03-09 00:49:31.855447 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:49:31.855452 | orchestrator | Monday 09 March 2026 00:48:23 +0000 (0:00:00.798) 0:00:01.334 ********** 2026-03-09 00:49:31.855458 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-09 00:49:31.855464 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-09 00:49:31.855468 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-09 00:49:31.855473 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-09 00:49:31.855478 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-09 00:49:31.855484 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-09 00:49:31.855488 | orchestrator | 2026-03-09 00:49:31.855493 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-09 00:49:31.855498 | orchestrator | 2026-03-09 00:49:31.855503 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-09 00:49:31.855508 | orchestrator | Monday 09 March 2026 00:48:24 +0000 (0:00:00.885) 0:00:02.220 ********** 2026-03-09 00:49:31.855515 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:49:31.855521 | orchestrator | 2026-03-09 00:49:31.855526 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-09 00:49:31.855548 | orchestrator | Monday 09 March 2026 00:48:25 +0000 (0:00:01.616) 0:00:03.837 ********** 2026-03-09 00:49:31.855553 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-09 00:49:31.855558 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-09 00:49:31.855563 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-09 00:49:31.855567 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-09 00:49:31.855572 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-09 00:49:31.855577 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-09 
00:49:31.855582 | orchestrator | 2026-03-09 00:49:31.855586 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-09 00:49:31.855592 | orchestrator | Monday 09 March 2026 00:48:27 +0000 (0:00:01.644) 0:00:05.481 ********** 2026-03-09 00:49:31.855597 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-09 00:49:31.855602 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-09 00:49:31.855607 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-09 00:49:31.855612 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-09 00:49:31.855645 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-09 00:49:31.855652 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-09 00:49:31.855656 | orchestrator | 2026-03-09 00:49:31.855661 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-09 00:49:31.855666 | orchestrator | Monday 09 March 2026 00:48:28 +0000 (0:00:01.532) 0:00:07.013 ********** 2026-03-09 00:49:31.855671 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-09 00:49:31.855676 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:49:31.855682 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-09 00:49:31.855686 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:49:31.855691 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-09 00:49:31.855696 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:49:31.855774 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-09 00:49:31.855781 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:49:31.855786 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-09 00:49:31.855792 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:49:31.855796 | orchestrator | skipping: 
[testbed-node-5] => (item=openvswitch)  2026-03-09 00:49:31.855801 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:49:31.855806 | orchestrator | 2026-03-09 00:49:31.855811 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-09 00:49:31.855816 | orchestrator | Monday 09 March 2026 00:48:30 +0000 (0:00:01.327) 0:00:08.341 ********** 2026-03-09 00:49:31.855820 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:49:31.855825 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:49:31.855830 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:49:31.855835 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:49:31.855840 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:49:31.855844 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:49:31.855849 | orchestrator | 2026-03-09 00:49:31.855854 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-09 00:49:31.855859 | orchestrator | Monday 09 March 2026 00:48:31 +0000 (0:00:00.809) 0:00:09.150 ********** 2026-03-09 00:49:31.855881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.855895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.855900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.855909 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.855915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.855950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.855962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.855973 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.855979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.855987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.855992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-03-09 00:49:31.856010 | orchestrator | 2026-03-09 00:49:31.856016 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-09 00:49:31.856021 | orchestrator | Monday 09 March 2026 00:48:32 +0000 (0:00:01.519) 0:00:10.669 ********** 2026-03-09 00:49:31.856026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856049 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856064 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856115 | orchestrator | 2026-03-09 00:49:31.856120 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-09 00:49:31.856125 | orchestrator | Monday 09 March 2026 00:48:35 +0000 (0:00:03.247) 0:00:13.917 ********** 2026-03-09 00:49:31.856130 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:49:31.856134 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:49:31.856139 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:49:31.856144 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:49:31.856149 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:49:31.856154 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:49:31.856158 | orchestrator | 2026-03-09 00:49:31.856163 | orchestrator | TASK 
[openvswitch : Check openvswitch containers] ****************************** 2026-03-09 00:49:31.856168 | orchestrator | Monday 09 March 2026 00:48:37 +0000 (0:00:01.197) 0:00:15.114 ********** 2026-03-09 00:49:31.856173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856204 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856209 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856227 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-09 00:49:31.856258 | orchestrator | 2026-03-09 00:49:31.856263 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:49:31.856268 | orchestrator | Monday 09 March 2026 00:48:40 +0000 (0:00:03.094) 0:00:18.208 ********** 2026-03-09 00:49:31.856273 | orchestrator | 2026-03-09 00:49:31.856278 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:49:31.856283 | orchestrator | Monday 09 March 2026 00:48:40 +0000 (0:00:00.140) 0:00:18.349 ********** 2026-03-09 00:49:31.856288 | orchestrator | 2026-03-09 00:49:31.856292 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:49:31.856297 | orchestrator | Monday 09 March 2026 00:48:40 +0000 (0:00:00.130) 0:00:18.480 ********** 2026-03-09 00:49:31.856302 | orchestrator | 2026-03-09 00:49:31.856307 | orchestrator 
| TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:49:31.856311 | orchestrator | Monday 09 March 2026 00:48:40 +0000 (0:00:00.199) 0:00:18.680 ********** 2026-03-09 00:49:31.856316 | orchestrator | 2026-03-09 00:49:31.856321 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:49:31.856326 | orchestrator | Monday 09 March 2026 00:48:40 +0000 (0:00:00.262) 0:00:18.942 ********** 2026-03-09 00:49:31.856331 | orchestrator | 2026-03-09 00:49:31.856336 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-09 00:49:31.856340 | orchestrator | Monday 09 March 2026 00:48:41 +0000 (0:00:00.346) 0:00:19.288 ********** 2026-03-09 00:49:31.856345 | orchestrator | 2026-03-09 00:49:31.856350 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-09 00:49:31.856378 | orchestrator | Monday 09 March 2026 00:48:41 +0000 (0:00:00.259) 0:00:19.548 ********** 2026-03-09 00:49:31.856384 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:49:31.856393 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:49:31.856398 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:49:31.856405 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:49:31.856410 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:49:31.856415 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:49:31.856420 | orchestrator | 2026-03-09 00:49:31.856425 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-09 00:49:31.856429 | orchestrator | Monday 09 March 2026 00:48:51 +0000 (0:00:10.218) 0:00:29.766 ********** 2026-03-09 00:49:31.856434 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:49:31.856439 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:49:31.856444 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:49:31.856448 | 
orchestrator | ok: [testbed-node-3] 2026-03-09 00:49:31.856453 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:49:31.856458 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:49:31.856463 | orchestrator | 2026-03-09 00:49:31.856468 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-09 00:49:31.856472 | orchestrator | Monday 09 March 2026 00:48:53 +0000 (0:00:02.228) 0:00:31.995 ********** 2026-03-09 00:49:31.856477 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:49:31.856482 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:49:31.856487 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:49:31.856491 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:49:31.856496 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:49:31.856501 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:49:31.856505 | orchestrator | 2026-03-09 00:49:31.856510 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-09 00:49:31.856515 | orchestrator | Monday 09 March 2026 00:49:04 +0000 (0:00:10.493) 0:00:42.488 ********** 2026-03-09 00:49:31.856520 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-09 00:49:31.856525 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-09 00:49:31.856530 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-09 00:49:31.856535 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-09 00:49:31.856540 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-09 00:49:31.856548 | orchestrator | changed: [testbed-node-5] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-09 00:49:31.856553 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-09 00:49:31.856557 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-09 00:49:31.856562 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-09 00:49:31.856567 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-09 00:49:31.856572 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-09 00:49:31.856577 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-09 00:49:31.856581 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:49:31.856586 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:49:31.856591 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:49:31.856599 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:49:31.856604 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:49:31.856609 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-09 00:49:31.856614 | orchestrator | 2026-03-09 00:49:31.856619 | 
orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-09 00:49:31.856624 | orchestrator | Monday 09 March 2026 00:49:13 +0000 (0:00:09.123) 0:00:51.612 ********** 2026-03-09 00:49:31.856629 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-09 00:49:31.856634 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:49:31.856639 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-09 00:49:31.856643 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:49:31.856648 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-09 00:49:31.856653 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-09 00:49:31.856658 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:49:31.856663 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-09 00:49:31.856668 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-09 00:49:31.856673 | orchestrator | 2026-03-09 00:49:31.856677 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-09 00:49:31.856682 | orchestrator | Monday 09 March 2026 00:49:16 +0000 (0:00:02.910) 0:00:54.522 ********** 2026-03-09 00:49:31.856687 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-09 00:49:31.856692 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:49:31.856699 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-09 00:49:31.856704 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:49:31.856709 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-09 00:49:31.856714 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:49:31.856719 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-09 00:49:31.856724 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-09 00:49:31.856729 | orchestrator | 
changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-09 00:49:31.856734 | orchestrator | 2026-03-09 00:49:31.856739 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-09 00:49:31.856743 | orchestrator | Monday 09 March 2026 00:49:21 +0000 (0:00:04.550) 0:00:59.073 ********** 2026-03-09 00:49:31.856748 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:49:31.856753 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:49:31.856758 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:49:31.856762 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:49:31.856767 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:49:31.856772 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:49:31.856777 | orchestrator | 2026-03-09 00:49:31.856782 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:49:31.856787 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-09 00:49:31.856792 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-09 00:49:31.856797 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-09 00:49:31.856802 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 00:49:31.856807 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 00:49:31.856818 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 00:49:31.856823 | orchestrator | 2026-03-09 00:49:31.856828 | orchestrator | 2026-03-09 00:49:31.856833 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:49:31.856838 | orchestrator | Monday 09 
March 2026 00:49:31 +0000 (0:00:10.135) 0:01:09.209 ********** 2026-03-09 00:49:31.856843 | orchestrator | =============================================================================== 2026-03-09 00:49:31.856847 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.63s 2026-03-09 00:49:31.856852 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.22s 2026-03-09 00:49:31.856857 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 9.12s 2026-03-09 00:49:31.856862 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.55s 2026-03-09 00:49:31.856867 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.25s 2026-03-09 00:49:31.856871 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.09s 2026-03-09 00:49:31.856876 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.91s 2026-03-09 00:49:31.856881 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.23s 2026-03-09 00:49:31.856886 | orchestrator | module-load : Load modules ---------------------------------------------- 1.64s 2026-03-09 00:49:31.856890 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.62s 2026-03-09 00:49:31.856895 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.53s 2026-03-09 00:49:31.856900 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.52s 2026-03-09 00:49:31.856905 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.34s 2026-03-09 00:49:31.856909 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.33s 2026-03-09 00:49:31.856914 | orchestrator | openvswitch : Copying over 
ovs-vsctl wrapper ---------------------------- 1.20s 2026-03-09 00:49:31.856919 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s 2026-03-09 00:49:31.856924 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.81s 2026-03-09 00:49:31.856928 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.80s 2026-03-09 00:49:31.856933 | orchestrator | 2026-03-09 00:49:31 | INFO  | Task a4f75d62-8ce1-42a4-a1bc-548760b3a8e1 is in state SUCCESS 2026-03-09 00:49:31.856938 | orchestrator | 2026-03-09 00:49:31 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:49:31.858589 | orchestrator | 2026-03-09 00:49:31 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:49:31.859571 | orchestrator | 2026-03-09 00:49:31 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:49:31.862255 | orchestrator | 2026-03-09 00:49:31 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED 2026-03-09 00:49:31.862301 | orchestrator | 2026-03-09 00:49:31 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:49:34.898394 | orchestrator | 2026-03-09 00:49:34 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:49:34.898955 | orchestrator | 2026-03-09 00:49:34 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:49:34.899462 | orchestrator | 2026-03-09 00:49:34 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:49:34.900584 | orchestrator | 2026-03-09 00:49:34 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED 2026-03-09 00:49:34.903404 | orchestrator | 2026-03-09 00:49:34 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED 2026-03-09 00:49:34.903457 | orchestrator | 2026-03-09 00:49:34 | INFO  | Wait 1 second(s) until the 
next check [identical polling cycles from 00:49:37 through 00:50:42 elided: tasks 9cfa2bbb-ee47-4207-9e4e-41388a0d079f, 6240b3c2-4dcd-4257-b765-193c305df9c1, 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d, 1b232584-6aad-42d3-be40-de44de7a537d and 1170d1f8-93b2-4065-82a0-dc17a4783f7d remained in state STARTED, rechecked every ~3 seconds] 2026-03-09 00:50:45.646713 | orchestrator | 2026-03-09 00:50:45 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:50:45.646806 | orchestrator | 2026-03-09 00:50:45 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:50:45.646838 | orchestrator | 2026-03-09 00:50:45 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:50:45.646850 | orchestrator | 2026-03-09 00:50:45 | INFO  | Task
1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED 2026-03-09 00:50:45.646860 | orchestrator | 2026-03-09 00:50:45 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED 2026-03-09 00:50:45.646871 | orchestrator | 2026-03-09 00:50:45 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:50:48.595110 | orchestrator | 2026-03-09 00:50:48 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:50:48.595661 | orchestrator | 2026-03-09 00:50:48 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:50:48.596202 | orchestrator | 2026-03-09 00:50:48 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state STARTED 2026-03-09 00:50:48.598066 | orchestrator | 2026-03-09 00:50:48 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED 2026-03-09 00:50:48.599349 | orchestrator | 2026-03-09 00:50:48 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED 2026-03-09 00:50:48.599410 | orchestrator | 2026-03-09 00:50:48 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:50:51.649831 | orchestrator | 2026-03-09 00:50:51 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:50:51.651385 | orchestrator | 2026-03-09 00:50:51 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:50:51.653092 | orchestrator | 2026-03-09 00:50:51 | INFO  | Task 4e1e2c2a-31e6-49f6-9cbb-15d2ebd2a39d is in state SUCCESS 2026-03-09 00:50:51.654406 | orchestrator | 2026-03-09 00:50:51.654443 | orchestrator | 2026-03-09 00:50:51.654453 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-09 00:50:51.654463 | orchestrator | 2026-03-09 00:50:51.654472 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-09 00:50:51.654482 | orchestrator | Monday 09 March 2026 00:45:50 +0000 (0:00:00.225) 0:00:00.225 
********** 2026-03-09 00:50:51.654491 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:50:51.654501 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:50:51.654510 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:50:51.654518 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:50:51.654527 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:50:51.654535 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:50:51.654544 | orchestrator | 2026-03-09 00:50:51.654553 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-09 00:50:51.654561 | orchestrator | Monday 09 March 2026 00:45:51 +0000 (0:00:00.654) 0:00:00.879 ********** 2026-03-09 00:50:51.654570 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:50:51.654580 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:50:51.654588 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:50:51.654597 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:51.654606 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:50:51.654614 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:50:51.654623 | orchestrator | 2026-03-09 00:50:51.654631 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-09 00:50:51.654640 | orchestrator | Monday 09 March 2026 00:45:51 +0000 (0:00:00.497) 0:00:01.377 ********** 2026-03-09 00:50:51.654649 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:50:51.654657 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:50:51.654666 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:50:51.654682 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:51.654698 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:50:51.654707 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:50:51.654716 | orchestrator | 2026-03-09 00:50:51.654724 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 
2026-03-09 00:50:51.654733 | orchestrator | Monday 09 March 2026 00:45:52 +0000 (0:00:00.602) 0:00:01.979 ********** 2026-03-09 00:50:51.654742 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:50:51.654751 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:50:51.654760 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:50:51.654768 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:50:51.654777 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:50:51.654785 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:50:51.654794 | orchestrator | 2026-03-09 00:50:51.654803 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-09 00:50:51.654811 | orchestrator | Monday 09 March 2026 00:45:55 +0000 (0:00:02.786) 0:00:04.766 ********** 2026-03-09 00:50:51.654841 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:50:51.654851 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:50:51.654859 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:50:51.654868 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:50:51.654876 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:50:51.654885 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:50:51.654893 | orchestrator | 2026-03-09 00:50:51.654902 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-09 00:50:51.654911 | orchestrator | Monday 09 March 2026 00:45:56 +0000 (0:00:01.015) 0:00:05.782 ********** 2026-03-09 00:50:51.654958 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:50:51.654969 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:50:51.654977 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:50:51.654992 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:50:51.655006 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:50:51.655016 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:50:51.655026 | orchestrator | 
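The k3s_prereq tasks above enable IPv4/IPv6 forwarding and IPv6 router advertisements on every node. A minimal sketch of the equivalent kernel settings — the sysctl keys are the standard kernel names, but the helper function and its use are an illustration, not taken from the role (applying them via `sysctl -p -` would require root):

```shell
#!/bin/sh
# Sketch only: emits the sysctl settings the k3s_prereq tasks apply.
# accept_ra=2 keeps router advertisements accepted even with forwarding on.
emit_k3s_sysctl() {
  cat <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
EOF
}

# Print the settings; as root they could be applied with:
#   emit_k3s_sysctl | sysctl -p -
emit_k3s_sysctl
```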
2026-03-09 00:50:51.655037 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-09 00:50:51.655047 | orchestrator | Monday 09 March 2026 00:45:57 +0000 (0:00:01.653) 0:00:07.435 **********
2026-03-09 00:50:51.655057 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.655067 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.655090 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.655101 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.655112 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.655122 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.655134 | orchestrator |
2026-03-09 00:50:51.655149 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-09 00:50:51.655163 | orchestrator | Monday 09 March 2026 00:45:58 +0000 (0:00:00.780) 0:00:08.215 **********
2026-03-09 00:50:51.655176 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.655189 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.655203 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.655216 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.655231 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.655244 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.655257 | orchestrator |
2026-03-09 00:50:51.655270 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-09 00:50:51.655320 | orchestrator | Monday 09 March 2026 00:45:59 +0000 (0:00:01.146) 0:00:09.361 **********
2026-03-09 00:50:51.655335 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:50:51.655351 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:50:51.655366 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.655381 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:50:51.655397 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:50:51.655412 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.655427 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:50:51.655442 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:50:51.655456 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.655471 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:50:51.655502 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:50:51.655518 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.655532 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:50:51.655543 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:50:51.655552 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.655572 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-09 00:50:51.655581 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-09 00:50:51.655589 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.655598 | orchestrator |
2026-03-09 00:50:51.655607 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-09 00:50:51.655616 | orchestrator | Monday 09 March 2026 00:46:00 +0000 (0:00:01.154) 0:00:10.516 **********
2026-03-09 00:50:51.655625 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.655633 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.655642 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.655651 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.655659 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.655668 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.655676 | orchestrator |
2026-03-09 00:50:51.655685 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-09 00:50:51.655696 | orchestrator | Monday 09 March 2026 00:46:02 +0000 (0:00:01.689) 0:00:12.206 **********
2026-03-09 00:50:51.655704 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:50:51.655713 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:50:51.655722 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:50:51.655731 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.655740 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.655748 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.655757 | orchestrator |
2026-03-09 00:50:51.655765 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-09 00:50:51.655774 | orchestrator | Monday 09 March 2026 00:46:03 +0000 (0:00:00.710) 0:00:12.916 **********
2026-03-09 00:50:51.655783 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.655792 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:50:51.655800 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:50:51.655809 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:50:51.655818 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:50:51.655826 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:50:51.655835 | orchestrator |
2026-03-09 00:50:51.655844 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-09 00:50:51.655853 | orchestrator | Monday 09 March 2026 00:46:08 +0000 (0:00:05.053) 0:00:17.969 **********
2026-03-09 00:50:51.655862 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.655871 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.655879 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.655888 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.655897 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.655947 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.655961 | orchestrator |
2026-03-09 00:50:51.655976 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-09 00:50:51.655989 | orchestrator | Monday 09 March 2026 00:46:10 +0000 (0:00:01.939) 0:00:19.909 **********
2026-03-09 00:50:51.656003 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.656017 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.656031 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.656044 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.656057 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.656072 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.656085 | orchestrator |
2026-03-09 00:50:51.656098 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-09 00:50:51.656114 | orchestrator | Monday 09 March 2026 00:46:12 +0000 (0:00:02.422) 0:00:22.332 **********
2026-03-09 00:50:51.656135 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.656148 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.656161 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.656175 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.656200 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.656215 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.656230 | orchestrator |
2026-03-09 00:50:51.656245 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-09 00:50:51.656259 | orchestrator | Monday 09 March 2026 00:46:14 +0000 (0:00:01.309) 0:00:23.642 **********
2026-03-09 00:50:51.656272 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-09 00:50:51.656333 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-09 00:50:51.656343 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.656352 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-09 00:50:51.656361 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-09 00:50:51.656370 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-09 00:50:51.656379 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-09 00:50:51.656388 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-09 00:50:51.656396 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-09 00:50:51.656405 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.656414 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-09 00:50:51.656422 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-09 00:50:51.656431 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.656440 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.656449 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.656457 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-09 00:50:51.656466 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-09 00:50:51.656475 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.656483 | orchestrator |
2026-03-09 00:50:51.656492 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-09 00:50:51.656512 | orchestrator | Monday 09 March 2026 00:46:16 +0000 (0:00:02.317) 0:00:25.960 **********
2026-03-09 00:50:51.656522 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.656531 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.656539 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.656548 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.656556 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.656565 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.656574 | orchestrator |
2026-03-09 00:50:51.656583 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-09 00:50:51.656591 | orchestrator | Monday 09 March 2026 00:46:17 +0000 (0:00:01.073) 0:00:27.033 **********
2026-03-09 00:50:51.656600 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.656609 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.656618 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.656626 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.656635 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.656643 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.656652 | orchestrator |
2026-03-09 00:50:51.656661 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-09 00:50:51.656670 | orchestrator |
2026-03-09 00:50:51.656678 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-09 00:50:51.656687 | orchestrator | Monday 09 March 2026 00:46:18 +0000 (0:00:01.516) 0:00:28.549 **********
2026-03-09 00:50:51.656696 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.656705 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.656714 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.656722 | orchestrator |
2026-03-09 00:50:51.656731 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-09 00:50:51.656740 | orchestrator | Monday 09 March 2026 00:46:20 +0000 (0:00:01.337) 0:00:29.887 **********
2026-03-09 00:50:51.656748 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.656757 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.656773 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.656782 | orchestrator |
2026-03-09 00:50:51.656791 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-09 00:50:51.656800 | orchestrator | Monday 09 March 2026 00:46:21 +0000 (0:00:01.536) 0:00:31.423 **********
2026-03-09 00:50:51.656809 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.656817 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.656826 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.656835 | orchestrator |
2026-03-09 00:50:51.656843 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-09 00:50:51.656852 | orchestrator | Monday 09 March 2026 00:46:22 +0000 (0:00:00.861) 0:00:32.284 **********
2026-03-09 00:50:51.656861 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.656869 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.656878 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.656887 | orchestrator |
2026-03-09 00:50:51.656896 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-09 00:50:51.656904 | orchestrator | Monday 09 March 2026 00:46:23 +0000 (0:00:00.704) 0:00:32.989 **********
2026-03-09 00:50:51.656913 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.656922 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.656931 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.656939 | orchestrator |
2026-03-09 00:50:51.656948 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-09 00:50:51.656957 | orchestrator | Monday 09 March 2026 00:46:23 +0000 (0:00:00.554) 0:00:33.543 **********
2026-03-09 00:50:51.656966 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.656974 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:50:51.656983 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:50:51.656992 | orchestrator |
2026-03-09 00:50:51.657000 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-09 00:50:51.657009 | orchestrator | Monday 09 March 2026 00:46:24 +0000 (0:00:00.891) 0:00:34.435 **********
2026-03-09 00:50:51.657018 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:50:51.657027 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.657035 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:50:51.657044 | orchestrator |
2026-03-09 00:50:51.657059 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-09 00:50:51.657069 | orchestrator | Monday 09 March 2026 00:46:27 +0000 (0:00:02.352) 0:00:36.787 **********
2026-03-09 00:50:51.657078 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:50:51.657087 | orchestrator |
2026-03-09 00:50:51.657095 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-09 00:50:51.657104 | orchestrator | Monday 09 March 2026 00:46:27 +0000 (0:00:00.747) 0:00:37.534 **********
2026-03-09 00:50:51.657156 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.657165 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.657174 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.657183 | orchestrator |
2026-03-09 00:50:51.657192 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-09 00:50:51.657201 | orchestrator | Monday 09 March 2026 00:46:30 +0000 (0:00:02.114) 0:00:39.648 **********
2026-03-09 00:50:51.657210 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.657218 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.657230 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.657245 | orchestrator |
2026-03-09 00:50:51.657259 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-09 00:50:51.657326 | orchestrator | Monday 09 March 2026 00:46:30 +0000 (0:00:00.683) 0:00:40.331 **********
2026-03-09 00:50:51.657345 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.657360 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.657374 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.657388 | orchestrator |
2026-03-09 00:50:51.657403 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-09 00:50:51.657429 | orchestrator | Monday 09 March 2026 00:46:31 +0000 (0:00:01.124) 0:00:41.456 **********
2026-03-09 00:50:51.657444 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.657459 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.657475 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.657485 | orchestrator |
2026-03-09 00:50:51.657493 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-09 00:50:51.657511 | orchestrator | Monday 09 March 2026 00:46:33 +0000 (0:00:00.976) 0:00:42.919 **********
2026-03-09 00:50:51.657521 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.657530 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.657538 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.657547 | orchestrator |
2026-03-09 00:50:51.657556 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-09 00:50:51.657564 | orchestrator | Monday 09 March 2026 00:46:34 +0000 (0:00:00.649) 0:00:43.895 **********
2026-03-09 00:50:51.657573 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.657582 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.657591 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.657602 | orchestrator |
2026-03-09 00:50:51.657618 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-09 00:50:51.657627 | orchestrator | Monday 09 March 2026 00:46:34 +0000 (0:00:00.649) 0:00:44.545 **********
2026-03-09 00:50:51.657636 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.657645 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:50:51.657654 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:50:51.657662 | orchestrator |
2026-03-09 00:50:51.657671 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-09 00:50:51.657680 | orchestrator | Monday 09 March 2026 00:46:36 +0000 (0:00:01.413) 0:00:45.959 **********
2026-03-09 00:50:51.657689 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.657697 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.657706 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.657715 | orchestrator |
2026-03-09 00:50:51.657723 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-09 00:50:51.657732 | orchestrator | Monday 09 March 2026 00:46:39 +0000 (0:00:02.998) 0:00:48.957 **********
2026-03-09 00:50:51.657741 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.657750 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.657758 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.657767 | orchestrator |
2026-03-09 00:50:51.657776 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-09 00:50:51.657785 | orchestrator | Monday 09 March 2026 00:46:40 +0000 (0:00:00.642) 0:00:49.599 **********
2026-03-09 00:50:51.657794 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-09 00:50:51.657804 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-09 00:50:51.657813 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-09 00:50:51.657822 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-09 00:50:51.657831 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-09 00:50:51.657840 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-09 00:50:51.657848 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-09 00:50:51.657863 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-09 00:50:51.657878 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-09 00:50:51.657888 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-09 00:50:51.657897 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-09 00:50:51.657906 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-09 00:50:51.657915 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.657924 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.657932 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.657941 | orchestrator |
2026-03-09 00:50:51.657950 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-09 00:50:51.657959 | orchestrator | Monday 09 March 2026 00:47:23 +0000 (0:00:43.384) 0:01:32.984 **********
2026-03-09 00:50:51.657968 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.657977 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.657985 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.657994 | orchestrator |
2026-03-09 00:50:51.658003 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-09 00:50:51.658061 | orchestrator | Monday 09 March 2026 00:47:23 +0000 (0:00:00.292) 0:01:33.276 **********
2026-03-09 00:50:51.658074 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.658083 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:50:51.658098 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:50:51.658113 | orchestrator |
2026-03-09 00:50:51.658127 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-09 00:50:51.658143 | orchestrator | Monday 09 March 2026 00:47:24 +0000 (0:00:01.061) 0:01:34.338 **********
2026-03-09 00:50:51.658156 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.658220 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:50:51.658239 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:50:51.658255 | orchestrator |
2026-03-09 00:50:51.658328 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-09 00:50:51.658342 | orchestrator | Monday 09 March 2026 00:47:26 +0000 (0:00:01.480) 0:01:35.819 **********
2026-03-09 00:50:51.658351 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.658360 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:50:51.658369 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:50:51.658378 | orchestrator |
2026-03-09 00:50:51.658387 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-09 00:50:51.658396 | orchestrator | Monday 09 March 2026 00:48:04 +0000 (0:00:38.011) 0:02:13.830 **********
2026-03-09 00:50:51.658405 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.658414 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.658423 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.658432 | orchestrator |
2026-03-09 00:50:51.658441 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-09 00:50:51.658449 | orchestrator | Monday 09 March 2026 00:48:05 +0000 (0:00:00.692) 0:02:14.688 **********
2026-03-09 00:50:51.658457 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.658465 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.658473 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.658481 | orchestrator |
2026-03-09 00:50:51.658489 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-09 00:50:51.658498 | orchestrator | Monday 09 March 2026 00:48:05 +0000 (0:00:00.648) 0:02:15.380 **********
2026-03-09 00:50:51.658506 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.658514 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:50:51.658539 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:50:51.658548 | orchestrator |
2026-03-09 00:50:51.658556 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-09 00:50:51.658564 | orchestrator | Monday 09 March 2026 00:48:06 +0000 (0:00:00.648) 0:02:16.029 **********
2026-03-09 00:50:51.658573 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.658581 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.658589 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.658597 | orchestrator |
2026-03-09 00:50:51.658605 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-09 00:50:51.658613 | orchestrator | Monday 09 March 2026 00:48:07 +0000 (0:00:01.073) 0:02:17.102 **********
2026-03-09 00:50:51.658621 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.658629 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.658637 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.658645 | orchestrator |
2026-03-09 00:50:51.658653 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-09 00:50:51.658661 | orchestrator | Monday 09 March 2026 00:48:07 +0000 (0:00:00.337) 0:02:17.439 **********
2026-03-09 00:50:51.658669 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.658677 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:50:51.658685 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:50:51.658693 | orchestrator |
2026-03-09 00:50:51.658708 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-09 00:50:51.658719 | orchestrator | Monday 09 March 2026 00:48:08 +0000 (0:00:00.723) 0:02:18.162 **********
2026-03-09 00:50:51.658727 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.658735 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:50:51.658743 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:50:51.658751 | orchestrator |
2026-03-09 00:50:51.658759 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-09 00:50:51.658767 | orchestrator | Monday 09 March 2026 00:48:09 +0000 (0:00:00.683) 0:02:18.846 **********
2026-03-09 00:50:51.658775 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.658783 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:50:51.658791 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:50:51.658799 | orchestrator |
2026-03-09 00:50:51.658807 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-09 00:50:51.658815 | orchestrator | Monday 09 March 2026 00:48:10 +0000 (0:00:01.296) 0:02:20.143 **********
2026-03-09 00:50:51.658824 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:50:51.658838 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:50:51.658852 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:50:51.658860 | orchestrator |
2026-03-09 00:50:51.658868 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-09 00:50:51.658876 | orchestrator | Monday 09 March 2026 00:48:11 +0000 (0:00:00.839) 0:02:20.983 **********
2026-03-09 00:50:51.658884 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.658892 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.658900 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.658908 | orchestrator |
2026-03-09 00:50:51.658916 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-09 00:50:51.658924 | orchestrator | Monday 09 March 2026 00:48:11 +0000 (0:00:00.288) 0:02:21.271 **********
2026-03-09 00:50:51.658938 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.658947 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.658954 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.658962 | orchestrator |
2026-03-09 00:50:51.658970 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-09 00:50:51.658979 | orchestrator | Monday 09 March 2026 00:48:11 +0000 (0:00:00.291) 0:02:21.563 **********
2026-03-09 00:50:51.658987 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.658995 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.659003 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.659011 | orchestrator |
2026-03-09 00:50:51.659019 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-09 00:50:51.659034 | orchestrator | Monday 09 March 2026 00:48:12 +0000 (0:00:00.930) 0:02:22.494 **********
2026-03-09 00:50:51.659042 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:50:51.659050 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:50:51.659058 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:50:51.659066 | orchestrator |
2026-03-09 00:50:51.659074 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-09 00:50:51.659082 | orchestrator | Monday 09 March 2026 00:48:13 +0000 (0:00:00.723) 0:02:23.217 **********
2026-03-09 00:50:51.659090 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-09 00:50:51.659105 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-09 00:50:51.659114 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-09 00:50:51.659122 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-09 00:50:51.659130 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-09 00:50:51.659138 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-09 00:50:51.659146 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-09 00:50:51.659154 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-09 00:50:51.659162 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-09 00:50:51.659170 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-09 00:50:51.659181 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-09 00:50:51.659193 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-09 00:50:51.659201 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-09 00:50:51.659210 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-09 00:50:51.659218 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-09 00:50:51.659225 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-09 00:50:51.659233 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-09 00:50:51.659241 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-09 00:50:51.659249 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-09 00:50:51.659258 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-09 00:50:51.659266 | orchestrator |
2026-03-09 00:50:51.659294 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-09 00:50:51.659303 | orchestrator |
2026-03-09 00:50:51.659311 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-09 00:50:51.659319 | orchestrator | Monday 09 March 2026 00:48:17 +0000 (0:00:03.427) 0:02:26.644 **********
2026-03-09 00:50:51.659327 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:50:51.659335 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:50:51.659343 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:50:51.659351 | orchestrator |
2026-03-09 00:50:51.659359 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-09 00:50:51.659367 | orchestrator | Monday 09 March 2026 00:48:17 +0000 (0:00:00.588) 0:02:27.233 **********
2026-03-09 00:50:51.659381 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:50:51.659389 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:50:51.659397 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:50:51.659405 | orchestrator |
2026-03-09 00:50:51.659413 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-09 00:50:51.659421 | orchestrator | Monday 09 March 2026 00:48:18 +0000 (0:00:00.351) 0:02:27.887 **********
2026-03-09 00:50:51.659428 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:50:51.659441 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:50:51.659449 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:50:51.659457 | orchestrator |
2026-03-09 00:50:51.659465 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-09 00:50:51.659473 | orchestrator | Monday 09 March 2026 00:48:18 +0000 (0:00:00.653) 0:02:28.238 **********
2026-03-09 00:50:51.659481 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:50:51.659489 | orchestrator |
2026-03-09 00:50:51.659497 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-09 00:50:51.659505 | orchestrator | Monday 09 March 2026 00:48:19 +0000 (0:00:00.653) 0:02:28.891 **********
2026-03-09 00:50:51.659513 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.659521 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.659528 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.659536 | orchestrator |
2026-03-09 00:50:51.659544 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-09 00:50:51.659552 | orchestrator | Monday 09 March 2026 00:48:19 +0000 (0:00:00.296) 0:02:29.188 **********
2026-03-09 00:50:51.659560 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.659568 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.659576 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.659584 | orchestrator |
2026-03-09 00:50:51.659592 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-09 00:50:51.659600 | orchestrator | Monday 09 March 2026 00:48:19 +0000 (0:00:00.310) 0:02:29.499 **********
2026-03-09 00:50:51.659608 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.659616 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.659624 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.659632 | orchestrator |
2026-03-09 00:50:51.659641 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-09 00:50:51.659649 | orchestrator | Monday 09 March 2026 00:48:20 +0000 (0:00:00.344) 0:02:29.843 **********
2026-03-09 00:50:51.659657 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:50:51.659665 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:50:51.659674 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:50:51.659682 | orchestrator |
2026-03-09 00:50:51.659696 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-09 00:50:51.659704 | orchestrator | Monday 09 March 2026 00:48:20 +0000 (0:00:00.711) 0:02:30.555 **********
2026-03-09 00:50:51.659713 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:50:51.659721 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:50:51.659729 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:50:51.659738 | orchestrator |
2026-03-09 00:50:51.659747 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-09 00:50:51.659755 | orchestrator | Monday 09 March 2026 00:48:22 +0000 (0:00:01.120) 0:02:31.675 **********
2026-03-09 00:50:51.659763 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:50:51.659772 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:50:51.659780 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:50:51.659788 | orchestrator |
2026-03-09 00:50:51.659796 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-09 00:50:51.659805 | orchestrator | Monday 09 March 2026 00:48:23 +0000 (0:00:01.328) 0:02:33.004 **********
2026-03-09 00:50:51.659813 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:50:51.659821 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:50:51.659829 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:50:51.659842 | orchestrator |
2026-03-09 00:50:51.659851 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-09 00:50:51.659859 | orchestrator |
2026-03-09 00:50:51.659867 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-09 00:50:51.659875 | orchestrator | Monday 09 March 2026 00:48:34 +0000 (0:00:11.073) 0:02:44.078 **********
2026-03-09 00:50:51.659883 | orchestrator | ok: [testbed-manager]
2026-03-09 00:50:51.659891 | orchestrator |
2026-03-09 00:50:51.659899 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-09 00:50:51.659907 | orchestrator | Monday 09 March 2026 00:48:35 +0000 (0:00:00.950) 0:02:45.028 **********
2026-03-09 00:50:51.659916 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:51.659923 | orchestrator |
2026-03-09 00:50:51.659931 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-09 00:50:51.659940 | orchestrator | Monday 09 March 2026 00:48:35 +0000 (0:00:00.499) 0:02:45.528 **********
2026-03-09 00:50:51.659948 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-09 00:50:51.659956 | orchestrator |
2026-03-09 00:50:51.659964 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-09 00:50:51.659972 | orchestrator | Monday 09 March 2026 00:48:36 +0000 (0:00:00.646) 0:02:46.175 **********
2026-03-09 00:50:51.659980 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:51.659988 | orchestrator |
2026-03-09 00:50:51.659996 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-09 00:50:51.660005 | orchestrator | Monday 09 March 2026 00:48:37 +0000 (0:00:00.913) 0:02:47.089 **********
2026-03-09 00:50:51.660013 | orchestrator | changed: [testbed-manager]
2026-03-09 00:50:51.660021 | orchestrator |
2026-03-09 00:50:51.660030 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-09 00:50:51.660038 | orchestrator | Monday 09 March 2026 00:48:38 +0000 (0:00:00.626) 0:02:47.715 **********
2026-03-09 00:50:51.660046 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-09 00:50:51.660054 | orchestrator |
2026-03-09 00:50:51.660062 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-09 00:50:51.660070 | orchestrator | Monday 09 March 2026 00:48:39 +0000 (0:00:01.573) 0:02:49.288 **********
2026-03-09 00:50:51.660078 | orchestrator | changed:
[testbed-manager -> localhost] 2026-03-09 00:50:51.660086 | orchestrator | 2026-03-09 00:50:51.660095 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-09 00:50:51.660103 | orchestrator | Monday 09 March 2026 00:48:40 +0000 (0:00:00.812) 0:02:50.101 ********** 2026-03-09 00:50:51.660111 | orchestrator | changed: [testbed-manager] 2026-03-09 00:50:51.660119 | orchestrator | 2026-03-09 00:50:51.660127 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-09 00:50:51.660139 | orchestrator | Monday 09 March 2026 00:48:41 +0000 (0:00:00.531) 0:02:50.633 ********** 2026-03-09 00:50:51.660148 | orchestrator | changed: [testbed-manager] 2026-03-09 00:50:51.660156 | orchestrator | 2026-03-09 00:50:51.660164 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-09 00:50:51.660172 | orchestrator | 2026-03-09 00:50:51.660180 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-09 00:50:51.660188 | orchestrator | Monday 09 March 2026 00:48:41 +0000 (0:00:00.401) 0:02:51.034 ********** 2026-03-09 00:50:51.660196 | orchestrator | ok: [testbed-manager] 2026-03-09 00:50:51.660205 | orchestrator | 2026-03-09 00:50:51.660218 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-09 00:50:51.660228 | orchestrator | Monday 09 March 2026 00:48:41 +0000 (0:00:00.169) 0:02:51.203 ********** 2026-03-09 00:50:51.660236 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-09 00:50:51.660244 | orchestrator | 2026-03-09 00:50:51.660252 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-09 00:50:51.660260 | orchestrator | Monday 09 March 2026 00:48:41 +0000 (0:00:00.246) 0:02:51.450 ********** 2026-03-09 00:50:51.660292 | 
orchestrator | ok: [testbed-manager] 2026-03-09 00:50:51.660302 | orchestrator | 2026-03-09 00:50:51.660310 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-03-09 00:50:51.660318 | orchestrator | Monday 09 March 2026 00:48:42 +0000 (0:00:00.842) 0:02:52.292 ********** 2026-03-09 00:50:51.660327 | orchestrator | ok: [testbed-manager] 2026-03-09 00:50:51.660334 | orchestrator | 2026-03-09 00:50:51.660343 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-09 00:50:51.660351 | orchestrator | Monday 09 March 2026 00:48:44 +0000 (0:00:01.656) 0:02:53.948 ********** 2026-03-09 00:50:51.660359 | orchestrator | changed: [testbed-manager] 2026-03-09 00:50:51.660367 | orchestrator | 2026-03-09 00:50:51.660375 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-09 00:50:51.660383 | orchestrator | Monday 09 March 2026 00:48:45 +0000 (0:00:00.767) 0:02:54.716 ********** 2026-03-09 00:50:51.660391 | orchestrator | ok: [testbed-manager] 2026-03-09 00:50:51.660399 | orchestrator | 2026-03-09 00:50:51.660412 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-09 00:50:51.660421 | orchestrator | Monday 09 March 2026 00:48:45 +0000 (0:00:00.564) 0:02:55.280 ********** 2026-03-09 00:50:51.660429 | orchestrator | changed: [testbed-manager] 2026-03-09 00:50:51.660448 | orchestrator | 2026-03-09 00:50:51.660457 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-09 00:50:51.660465 | orchestrator | Monday 09 March 2026 00:48:54 +0000 (0:00:08.306) 0:03:03.587 ********** 2026-03-09 00:50:51.660473 | orchestrator | changed: [testbed-manager] 2026-03-09 00:50:51.660481 | orchestrator | 2026-03-09 00:50:51.660489 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-09 
00:50:51.660507 | orchestrator | Monday 09 March 2026 00:49:10 +0000 (0:00:16.037) 0:03:19.625 ********** 2026-03-09 00:50:51.660515 | orchestrator | ok: [testbed-manager] 2026-03-09 00:50:51.660523 | orchestrator | 2026-03-09 00:50:51.660532 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-09 00:50:51.660540 | orchestrator | 2026-03-09 00:50:51.660548 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-09 00:50:51.660557 | orchestrator | Monday 09 March 2026 00:49:10 +0000 (0:00:00.670) 0:03:20.296 ********** 2026-03-09 00:50:51.660565 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:50:51.660573 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:50:51.660581 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:50:51.660589 | orchestrator | 2026-03-09 00:50:51.660597 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-09 00:50:51.660605 | orchestrator | Monday 09 March 2026 00:49:11 +0000 (0:00:00.429) 0:03:20.725 ********** 2026-03-09 00:50:51.660613 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:51.660621 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:50:51.660629 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:50:51.660637 | orchestrator | 2026-03-09 00:50:51.660645 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-09 00:50:51.660653 | orchestrator | Monday 09 March 2026 00:49:11 +0000 (0:00:00.447) 0:03:21.173 ********** 2026-03-09 00:50:51.660661 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:50:51.660670 | orchestrator | 2026-03-09 00:50:51.660678 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-09 00:50:51.660686 | orchestrator | Monday 09 
March 2026 00:49:12 +0000 (0:00:00.965) 0:03:22.139 ********** 2026-03-09 00:50:51.660694 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-09 00:50:51.660703 | orchestrator | 2026-03-09 00:50:51.660711 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-09 00:50:51.660719 | orchestrator | Monday 09 March 2026 00:49:13 +0000 (0:00:01.106) 0:03:23.245 ********** 2026-03-09 00:50:51.660727 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 00:50:51.660740 | orchestrator | 2026-03-09 00:50:51.660748 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-09 00:50:51.660757 | orchestrator | Monday 09 March 2026 00:49:14 +0000 (0:00:01.119) 0:03:24.365 ********** 2026-03-09 00:50:51.660765 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:51.660773 | orchestrator | 2026-03-09 00:50:51.660781 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-09 00:50:51.660789 | orchestrator | Monday 09 March 2026 00:49:14 +0000 (0:00:00.133) 0:03:24.498 ********** 2026-03-09 00:50:51.660797 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 00:50:51.660805 | orchestrator | 2026-03-09 00:50:51.660813 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-09 00:50:51.660821 | orchestrator | Monday 09 March 2026 00:49:16 +0000 (0:00:01.194) 0:03:25.693 ********** 2026-03-09 00:50:51.660829 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:51.660837 | orchestrator | 2026-03-09 00:50:51.660846 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-09 00:50:51.660858 | orchestrator | Monday 09 March 2026 00:49:16 +0000 (0:00:00.130) 0:03:25.824 ********** 2026-03-09 00:50:51.660867 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:51.660875 | orchestrator | 2026-03-09 
00:50:51.660883 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-09 00:50:51.660891 | orchestrator | Monday 09 March 2026 00:49:16 +0000 (0:00:00.148) 0:03:25.973 ********** 2026-03-09 00:50:51.660899 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:51.660908 | orchestrator | 2026-03-09 00:50:51.660916 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-09 00:50:51.660924 | orchestrator | Monday 09 March 2026 00:49:16 +0000 (0:00:00.111) 0:03:26.084 ********** 2026-03-09 00:50:51.660932 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:51.660940 | orchestrator | 2026-03-09 00:50:51.660948 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-09 00:50:51.660956 | orchestrator | Monday 09 March 2026 00:49:16 +0000 (0:00:00.121) 0:03:26.206 ********** 2026-03-09 00:50:51.660964 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-09 00:50:51.660972 | orchestrator | 2026-03-09 00:50:51.660980 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-09 00:50:51.660989 | orchestrator | Monday 09 March 2026 00:49:22 +0000 (0:00:05.859) 0:03:32.065 ********** 2026-03-09 00:50:51.660997 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-09 00:50:51.661005 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-03-09 00:50:51.661013 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-09 00:50:51.661022 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-09 00:50:51.661029 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-09 00:50:51.661038 | orchestrator | 2026-03-09 00:50:51.661046 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-09 00:50:51.661054 | orchestrator | Monday 09 March 2026 00:50:15 +0000 (0:00:52.691) 0:04:24.757 ********** 2026-03-09 00:50:51.661067 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 00:50:51.661076 | orchestrator | 2026-03-09 00:50:51.661084 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-09 00:50:51.661093 | orchestrator | Monday 09 March 2026 00:50:16 +0000 (0:00:01.352) 0:04:26.109 ********** 2026-03-09 00:50:51.661101 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-09 00:50:51.661109 | orchestrator | 2026-03-09 00:50:51.661117 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-09 00:50:51.661125 | orchestrator | Monday 09 March 2026 00:50:18 +0000 (0:00:01.850) 0:04:27.960 ********** 2026-03-09 00:50:51.661133 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-09 00:50:51.661142 | orchestrator | 2026-03-09 00:50:51.661150 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-09 00:50:51.661173 | orchestrator | Monday 09 March 2026 00:50:19 +0000 (0:00:01.292) 0:04:29.253 ********** 2026-03-09 00:50:51.661188 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:51.661202 | orchestrator | 2026-03-09 00:50:51.661216 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-09 00:50:51.661228 | orchestrator 
| Monday 09 March 2026 00:50:19 +0000 (0:00:00.132) 0:04:29.386 ********** 2026-03-09 00:50:51.661242 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-09 00:50:51.661256 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-09 00:50:51.661272 | orchestrator | 2026-03-09 00:50:51.661454 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-09 00:50:51.661464 | orchestrator | Monday 09 March 2026 00:50:22 +0000 (0:00:02.742) 0:04:32.128 ********** 2026-03-09 00:50:51.661471 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:50:51.661478 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:50:51.661485 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:50:51.661492 | orchestrator | 2026-03-09 00:50:51.661515 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-09 00:50:51.661523 | orchestrator | Monday 09 March 2026 00:50:22 +0000 (0:00:00.329) 0:04:32.457 ********** 2026-03-09 00:50:51.661529 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:50:51.661537 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:50:51.661544 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:50:51.661550 | orchestrator | 2026-03-09 00:50:51.661568 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-09 00:50:51.661575 | orchestrator | 2026-03-09 00:50:51.661589 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-09 00:50:51.661596 | orchestrator | Monday 09 March 2026 00:50:24 +0000 (0:00:01.209) 0:04:33.667 ********** 2026-03-09 00:50:51.661603 | orchestrator | ok: [testbed-manager] 2026-03-09 00:50:51.661610 | orchestrator | 2026-03-09 00:50:51.661618 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-03-09 00:50:51.661633 | orchestrator | Monday 09 March 2026 00:50:24 +0000 (0:00:00.174) 0:04:33.842 ********** 2026-03-09 00:50:51.661640 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-09 00:50:51.661647 | orchestrator | 2026-03-09 00:50:51.661654 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-09 00:50:51.661661 | orchestrator | Monday 09 March 2026 00:50:24 +0000 (0:00:00.261) 0:04:34.103 ********** 2026-03-09 00:50:51.661668 | orchestrator | changed: [testbed-manager] 2026-03-09 00:50:51.661675 | orchestrator | 2026-03-09 00:50:51.661681 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-09 00:50:51.661688 | orchestrator | 2026-03-09 00:50:51.661695 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-09 00:50:51.661702 | orchestrator | Monday 09 March 2026 00:50:31 +0000 (0:00:06.621) 0:04:40.724 ********** 2026-03-09 00:50:51.661709 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:50:51.661715 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:50:51.661722 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:50:51.661729 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:50:51.661736 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:50:51.661743 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:50:51.661749 | orchestrator | 2026-03-09 00:50:51.661756 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-09 00:50:51.661763 | orchestrator | Monday 09 March 2026 00:50:32 +0000 (0:00:00.971) 0:04:41.696 ********** 2026-03-09 00:50:51.661770 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-09 00:50:51.662403 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-03-09 00:50:51.662429 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-09 00:50:51.662445 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-09 00:50:51.662452 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-09 00:50:51.662459 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-09 00:50:51.662466 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-09 00:50:51.662472 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-09 00:50:51.662479 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-09 00:50:51.662486 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-09 00:50:51.662493 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-09 00:50:51.662499 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-09 00:50:51.662519 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-09 00:50:51.662530 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-09 00:50:51.662537 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-09 00:50:51.662543 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-09 00:50:51.662550 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-09 00:50:51.662556 | orchestrator | ok: [testbed-node-1 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-03-09 00:50:51.662563 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-09 00:50:51.662570 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-09 00:50:51.662577 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-09 00:50:51.662584 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-09 00:50:51.662590 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-09 00:50:51.662597 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-09 00:50:51.662604 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-09 00:50:51.662610 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-09 00:50:51.662617 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-09 00:50:51.662624 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-09 00:50:51.662630 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-09 00:50:51.662637 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-09 00:50:51.662644 | orchestrator | 2026-03-09 00:50:51.662651 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-09 00:50:51.662659 | orchestrator | Monday 09 March 2026 00:50:48 +0000 (0:00:16.297) 0:04:57.993 ********** 2026-03-09 00:50:51.662665 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:50:51.662672 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:50:51.662679 | orchestrator | 
skipping: [testbed-node-5]
2026-03-09 00:50:51.662686 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.662692 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.662700 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.662706 | orchestrator |
2026-03-09 00:50:51.662714 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-09 00:50:51.662725 | orchestrator | Monday 09 March 2026 00:50:49 +0000 (0:00:00.911) 0:04:58.904 **********
2026-03-09 00:50:51.662732 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:50:51.662738 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:50:51.662745 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:50:51.662752 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:50:51.662759 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:50:51.662765 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:50:51.662772 | orchestrator |
2026-03-09 00:50:51.662779 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:50:51.662786 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 00:50:51.662794 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-09 00:50:51.662801 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-09 00:50:51.662808 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-09 00:50:51.662815 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-09 00:50:51.662821 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-09 00:50:51.662828 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-09 00:50:51.662835 | orchestrator |
2026-03-09 00:50:51.662842 | orchestrator |
2026-03-09 00:50:51.662849 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:50:51.662855 | orchestrator | Monday 09 March 2026 00:50:49 +0000 (0:00:00.542) 0:04:59.447 **********
2026-03-09 00:50:51.662862 | orchestrator | ===============================================================================
2026-03-09 00:50:51.662869 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 52.69s
2026-03-09 00:50:51.662876 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.38s
2026-03-09 00:50:51.662883 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 38.01s
2026-03-09 00:50:51.662895 | orchestrator | Manage labels ---------------------------------------------------------- 16.30s
2026-03-09 00:50:51.662905 | orchestrator | kubectl : Install required packages ------------------------------------ 16.04s
2026-03-09 00:50:51.662912 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.07s
2026-03-09 00:50:51.662919 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.31s
2026-03-09 00:50:51.662926 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.62s
2026-03-09 00:50:51.662933 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.86s
2026-03-09 00:50:51.662939 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.05s
2026-03-09 00:50:51.662946 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.43s
2026-03-09 00:50:51.662953 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.00s
2026-03-09 00:50:51.662960 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.79s
2026-03-09 00:50:51.662967 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.74s
2026-03-09 00:50:51.662973 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.42s
2026-03-09 00:50:51.662984 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.35s
2026-03-09 00:50:51.662991 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.32s
2026-03-09 00:50:51.662998 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.11s
2026-03-09 00:50:51.663004 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.94s
2026-03-09 00:50:51.663011 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.85s
2026-03-09 00:50:51.663018 | orchestrator | 2026-03-09 00:50:51 | INFO  | Task 4d8ae435-5a3c-41d1-b380-087afa042ab1 is in state STARTED
2026-03-09 00:50:51.663025 | orchestrator | 2026-03-09 00:50:51 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED
2026-03-09 00:50:51.663032 | orchestrator | 2026-03-09 00:50:51 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED
2026-03-09 00:50:51.663039 | orchestrator | 2026-03-09 00:50:51 | INFO  | Task 0fb0f3a1-9406-4ebe-93a8-3721a7760b75 is in state STARTED
2026-03-09 00:50:51.663046 | orchestrator | 2026-03-09 00:50:51 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:50:54.721093 | orchestrator | 2026-03-09 00:50:54 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:50:54.723616 | orchestrator | 2026-03-09 00:50:54 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED
2026-03-09 00:50:54.726824 | orchestrator | 2026-03-09 00:50:54 | INFO  | Task 4d8ae435-5a3c-41d1-b380-087afa042ab1 is in state STARTED
2026-03-09 00:50:54.728173 | orchestrator | 2026-03-09 00:50:54 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED
2026-03-09 00:50:54.730841 | orchestrator | 2026-03-09 00:50:54 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED
2026-03-09 00:50:54.734270 | orchestrator | 2026-03-09 00:50:54 | INFO  | Task 0fb0f3a1-9406-4ebe-93a8-3721a7760b75 is in state STARTED
2026-03-09 00:50:54.734335 | orchestrator | 2026-03-09 00:50:54 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:50:57.784255 | orchestrator | 2026-03-09 00:50:57 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:50:57.784384 | orchestrator | 2026-03-09 00:50:57 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED
2026-03-09 00:50:57.784395 | orchestrator | 2026-03-09 00:50:57 | INFO  | Task 4d8ae435-5a3c-41d1-b380-087afa042ab1 is in state STARTED
2026-03-09 00:50:57.785649 | orchestrator | 2026-03-09 00:50:57 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED
2026-03-09 00:50:57.786205 | orchestrator | 2026-03-09 00:50:57 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED
2026-03-09 00:50:57.788121 | orchestrator | 2026-03-09 00:50:57 | INFO  | Task 0fb0f3a1-9406-4ebe-93a8-3721a7760b75 is in state STARTED
2026-03-09 00:50:57.788154 | orchestrator | 2026-03-09 00:50:57 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:00.842429 | orchestrator | 2026-03-09 00:51:00 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:51:00.843400 | orchestrator | 2026-03-09 00:51:00 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED
2026-03-09 00:51:00.844456 | orchestrator | 2026-03-09 00:51:00 | INFO  | Task 4d8ae435-5a3c-41d1-b380-087afa042ab1 is in state STARTED
2026-03-09 00:51:00.845440 | orchestrator | 2026-03-09 00:51:00 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED
2026-03-09 00:51:00.846846 | orchestrator | 2026-03-09 00:51:00 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED
2026-03-09 00:51:00.847758 | orchestrator | 2026-03-09 00:51:00 | INFO  | Task 0fb0f3a1-9406-4ebe-93a8-3721a7760b75 is in state SUCCESS
2026-03-09 00:51:00.848042 | orchestrator | 2026-03-09 00:51:00 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:03.896921 | orchestrator | 2026-03-09 00:51:03 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:51:03.897185 | orchestrator | 2026-03-09 00:51:03 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED
2026-03-09 00:51:03.898574 | orchestrator | 2026-03-09 00:51:03 | INFO  | Task 4d8ae435-5a3c-41d1-b380-087afa042ab1 is in state STARTED
2026-03-09 00:51:03.899419 | orchestrator | 2026-03-09 00:51:03 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED
2026-03-09 00:51:03.900627 | orchestrator | 2026-03-09 00:51:03 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED
2026-03-09 00:51:03.900657 | orchestrator | 2026-03-09 00:51:03 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:06.935682 | orchestrator | 2026-03-09 00:51:06 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:51:06.936042 | orchestrator | 2026-03-09 00:51:06 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED
2026-03-09 00:51:06.937757 | orchestrator | 2026-03-09 00:51:06 | INFO  | Task 4d8ae435-5a3c-41d1-b380-087afa042ab1 is in state SUCCESS
2026-03-09 00:51:06.939608 | orchestrator | 2026-03-09 00:51:06 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED
2026-03-09 00:51:06.940434 | orchestrator | 2026-03-09 00:51:06 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED
2026-03-09 00:51:06.940559 | orchestrator | 2026-03-09 00:51:06 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:09.992642 | orchestrator | 2026-03-09 00:51:09 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:51:09.994592 | orchestrator | 2026-03-09 00:51:09 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED
2026-03-09 00:51:09.999462 | orchestrator | 2026-03-09 00:51:09 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED
2026-03-09 00:51:10.000078 | orchestrator | 2026-03-09 00:51:09 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED
2026-03-09 00:51:10.002241 | orchestrator | 2026-03-09 00:51:10 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:13.035431 | orchestrator | 2026-03-09 00:51:13 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:51:13.035515 | orchestrator | 2026-03-09 00:51:13 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED
2026-03-09 00:51:13.035522 | orchestrator | 2026-03-09 00:51:13 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED
2026-03-09 00:51:13.036052 | orchestrator | 2026-03-09 00:51:13 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED
2026-03-09 00:51:13.036083 | orchestrator | 2026-03-09 00:51:13 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:16.069166 | orchestrator | 2026-03-09 00:51:16 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:51:16.071428 | orchestrator | 2026-03-09 00:51:16 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED
2026-03-09 00:51:16.072684 | orchestrator | 2026-03-09 00:51:16 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED
2026-03-09 00:51:16.074551 | orchestrator | 2026-03-09 00:51:16 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED
2026-03-09 00:51:16.074600 | orchestrator | 2026-03-09 00:51:16 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:19.128509 | orchestrator | 2026-03-09 00:51:19 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:51:19.129295 | orchestrator | 2026-03-09 00:51:19 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED
2026-03-09 00:51:19.131425 | orchestrator | 2026-03-09 00:51:19 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED
2026-03-09 00:51:19.132032 | orchestrator | 2026-03-09 00:51:19 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED
2026-03-09 00:51:19.132080 | orchestrator | 2026-03-09 00:51:19 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:22.166163 | orchestrator | 2026-03-09 00:51:22 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:51:22.166815 | orchestrator | 2026-03-09 00:51:22 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED
2026-03-09 00:51:22.167476 | orchestrator | 2026-03-09 00:51:22 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED
2026-03-09 00:51:22.168503 | orchestrator | 2026-03-09 00:51:22 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state STARTED
2026-03-09 00:51:22.168537 | orchestrator | 2026-03-09 00:51:22 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:51:25.203011 | orchestrator | 2026-03-09 00:51:25 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:51:25.209481 | orchestrator | 2026-03-09 00:51:25 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED
2026-03-09 00:51:25.209566 | orchestrator | 2026-03-09 00:51:25 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED
2026-03-09 00:51:25.209578 | orchestrator | 2026-03-09 00:51:25 | INFO  | Task 1170d1f8-93b2-4065-82a0-dc17a4783f7d is in state SUCCESS
2026-03-09 00:51:25.209588 | orchestrator | 2026-03-09
00:51:25 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:51:25.210774 | orchestrator | 2026-03-09 00:51:25.210814 | orchestrator | 2026-03-09 00:51:25.210824 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-09 00:51:25.210834 | orchestrator | 2026-03-09 00:51:25.210843 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-09 00:51:25.210853 | orchestrator | Monday 09 March 2026 00:50:56 +0000 (0:00:00.207) 0:00:00.207 ********** 2026-03-09 00:51:25.210862 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-09 00:51:25.210872 | orchestrator | 2026-03-09 00:51:25.210880 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-09 00:51:25.210885 | orchestrator | Monday 09 March 2026 00:50:57 +0000 (0:00:00.793) 0:00:01.000 ********** 2026-03-09 00:51:25.210891 | orchestrator | changed: [testbed-manager] 2026-03-09 00:51:25.210896 | orchestrator | 2026-03-09 00:51:25.210902 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-09 00:51:25.210907 | orchestrator | Monday 09 March 2026 00:50:58 +0000 (0:00:01.532) 0:00:02.533 ********** 2026-03-09 00:51:25.210913 | orchestrator | changed: [testbed-manager] 2026-03-09 00:51:25.210918 | orchestrator | 2026-03-09 00:51:25.210923 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:51:25.210929 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:51:25.210936 | orchestrator | 2026-03-09 00:51:25.210941 | orchestrator | 2026-03-09 00:51:25.210946 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:51:25.210951 | orchestrator | Monday 09 March 2026 00:50:59 +0000 (0:00:00.352) 0:00:02.885 ********** 
2026-03-09 00:51:25.210957 | orchestrator | =============================================================================== 2026-03-09 00:51:25.210962 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.53s 2026-03-09 00:51:25.210981 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.79s 2026-03-09 00:51:25.210986 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.35s 2026-03-09 00:51:25.210992 | orchestrator | 2026-03-09 00:51:25.210997 | orchestrator | 2026-03-09 00:51:25.211002 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-09 00:51:25.211007 | orchestrator | 2026-03-09 00:51:25.211012 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-09 00:51:25.211017 | orchestrator | Monday 09 March 2026 00:50:56 +0000 (0:00:00.176) 0:00:00.176 ********** 2026-03-09 00:51:25.211023 | orchestrator | ok: [testbed-manager] 2026-03-09 00:51:25.211029 | orchestrator | 2026-03-09 00:51:25.211034 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-09 00:51:25.211040 | orchestrator | Monday 09 March 2026 00:50:57 +0000 (0:00:00.609) 0:00:00.786 ********** 2026-03-09 00:51:25.211045 | orchestrator | ok: [testbed-manager] 2026-03-09 00:51:25.211050 | orchestrator | 2026-03-09 00:51:25.211055 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-09 00:51:25.211063 | orchestrator | Monday 09 March 2026 00:50:58 +0000 (0:00:00.868) 0:00:01.654 ********** 2026-03-09 00:51:25.211072 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-09 00:51:25.211080 | orchestrator | 2026-03-09 00:51:25.211088 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-09 00:51:25.211096 | 
orchestrator | Monday 09 March 2026 00:50:58 +0000 (0:00:00.716) 0:00:02.371 ********** 2026-03-09 00:51:25.211104 | orchestrator | changed: [testbed-manager] 2026-03-09 00:51:25.211112 | orchestrator | 2026-03-09 00:51:25.211120 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-09 00:51:25.211129 | orchestrator | Monday 09 March 2026 00:51:00 +0000 (0:00:01.668) 0:00:04.040 ********** 2026-03-09 00:51:25.211137 | orchestrator | changed: [testbed-manager] 2026-03-09 00:51:25.211146 | orchestrator | 2026-03-09 00:51:25.211154 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-09 00:51:25.211163 | orchestrator | Monday 09 March 2026 00:51:01 +0000 (0:00:00.612) 0:00:04.652 ********** 2026-03-09 00:51:25.211172 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-09 00:51:25.211178 | orchestrator | 2026-03-09 00:51:25.211183 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-09 00:51:25.211188 | orchestrator | Monday 09 March 2026 00:51:03 +0000 (0:00:01.898) 0:00:06.551 ********** 2026-03-09 00:51:25.211194 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-09 00:51:25.211199 | orchestrator | 2026-03-09 00:51:25.211204 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-09 00:51:25.211215 | orchestrator | Monday 09 March 2026 00:51:04 +0000 (0:00:00.936) 0:00:07.488 ********** 2026-03-09 00:51:25.211221 | orchestrator | ok: [testbed-manager] 2026-03-09 00:51:25.211226 | orchestrator | 2026-03-09 00:51:25.211231 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-09 00:51:25.211236 | orchestrator | Monday 09 March 2026 00:51:04 +0000 (0:00:00.464) 0:00:07.953 ********** 2026-03-09 00:51:25.211241 | orchestrator | ok: [testbed-manager] 2026-03-09 00:51:25.211246 | 
orchestrator | 2026-03-09 00:51:25.211252 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:51:25.211279 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:51:25.211285 | orchestrator | 2026-03-09 00:51:25.211290 | orchestrator | 2026-03-09 00:51:25.211295 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:51:25.211300 | orchestrator | Monday 09 March 2026 00:51:04 +0000 (0:00:00.359) 0:00:08.313 ********** 2026-03-09 00:51:25.211305 | orchestrator | =============================================================================== 2026-03-09 00:51:25.211310 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.90s 2026-03-09 00:51:25.211320 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.67s 2026-03-09 00:51:25.211325 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.94s 2026-03-09 00:51:25.211340 | orchestrator | Create .kube directory -------------------------------------------------- 0.87s 2026-03-09 00:51:25.211345 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s 2026-03-09 00:51:25.211350 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.61s 2026-03-09 00:51:25.211356 | orchestrator | Get home directory of operator user ------------------------------------- 0.61s 2026-03-09 00:51:25.211362 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.47s 2026-03-09 00:51:25.211368 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.36s 2026-03-09 00:51:25.211374 | orchestrator | 2026-03-09 00:51:25.211380 | orchestrator | 2026-03-09 00:51:25.211386 | orchestrator | PLAY [Set 
kolla_action_rabbitmq] *********************************************** 2026-03-09 00:51:25.211391 | orchestrator | 2026-03-09 00:51:25.211398 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-09 00:51:25.211404 | orchestrator | Monday 09 March 2026 00:48:44 +0000 (0:00:00.070) 0:00:00.070 ********** 2026-03-09 00:51:25.211410 | orchestrator | ok: [localhost] => { 2026-03-09 00:51:25.211417 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-09 00:51:25.211423 | orchestrator | } 2026-03-09 00:51:25.211429 | orchestrator | 2026-03-09 00:51:25.211435 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-09 00:51:25.211441 | orchestrator | Monday 09 March 2026 00:48:44 +0000 (0:00:00.052) 0:00:00.122 ********** 2026-03-09 00:51:25.211448 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-09 00:51:25.211456 | orchestrator | ...ignoring 2026-03-09 00:51:25.211462 | orchestrator | 2026-03-09 00:51:25.211468 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-09 00:51:25.211474 | orchestrator | Monday 09 March 2026 00:48:47 +0000 (0:00:03.018) 0:00:03.141 ********** 2026-03-09 00:51:25.211479 | orchestrator | skipping: [localhost] 2026-03-09 00:51:25.211484 | orchestrator | 2026-03-09 00:51:25.211489 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-09 00:51:25.211495 | orchestrator | Monday 09 March 2026 00:48:47 +0000 (0:00:00.046) 0:00:03.188 ********** 2026-03-09 00:51:25.211500 | orchestrator | ok: [localhost] 2026-03-09 00:51:25.211505 | orchestrator | 2026-03-09 00:51:25.211510 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-03-09 00:51:25.211515 | orchestrator | 2026-03-09 00:51:25.211521 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:51:25.211526 | orchestrator | Monday 09 March 2026 00:48:48 +0000 (0:00:00.289) 0:00:03.478 ********** 2026-03-09 00:51:25.211531 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:51:25.211536 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:51:25.211541 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:51:25.211547 | orchestrator | 2026-03-09 00:51:25.211552 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:51:25.211558 | orchestrator | Monday 09 March 2026 00:48:48 +0000 (0:00:00.328) 0:00:03.806 ********** 2026-03-09 00:51:25.211567 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-09 00:51:25.211575 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-09 00:51:25.211583 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-09 00:51:25.211591 | orchestrator | 2026-03-09 00:51:25.211598 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-09 00:51:25.211607 | orchestrator | 2026-03-09 00:51:25.211614 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-09 00:51:25.211628 | orchestrator | Monday 09 March 2026 00:48:49 +0000 (0:00:00.967) 0:00:04.774 ********** 2026-03-09 00:51:25.211638 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:51:25.211646 | orchestrator | 2026-03-09 00:51:25.211655 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-09 00:51:25.211663 | orchestrator | Monday 09 March 2026 00:48:50 +0000 (0:00:00.578) 0:00:05.352 ********** 2026-03-09 
00:51:25.211672 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:51:25.211678 | orchestrator | 2026-03-09 00:51:25.211683 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-09 00:51:25.211688 | orchestrator | Monday 09 March 2026 00:48:51 +0000 (0:00:01.076) 0:00:06.428 ********** 2026-03-09 00:51:25.211693 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:25.211698 | orchestrator | 2026-03-09 00:51:25.211707 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-09 00:51:25.211712 | orchestrator | Monday 09 March 2026 00:48:51 +0000 (0:00:00.513) 0:00:06.942 ********** 2026-03-09 00:51:25.211717 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:25.211722 | orchestrator | 2026-03-09 00:51:25.211728 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-09 00:51:25.211733 | orchestrator | Monday 09 March 2026 00:48:53 +0000 (0:00:01.608) 0:00:08.550 ********** 2026-03-09 00:51:25.211738 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:25.211743 | orchestrator | 2026-03-09 00:51:25.211748 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-09 00:51:25.211753 | orchestrator | Monday 09 March 2026 00:48:53 +0000 (0:00:00.574) 0:00:09.125 ********** 2026-03-09 00:51:25.211758 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:25.211763 | orchestrator | 2026-03-09 00:51:25.211769 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-09 00:51:25.211774 | orchestrator | Monday 09 March 2026 00:48:57 +0000 (0:00:03.182) 0:00:12.307 ********** 2026-03-09 00:51:25.211779 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:51:25.211784 | orchestrator | 2026-03-09 00:51:25.211789 
| orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-09 00:51:25.211798 | orchestrator | Monday 09 March 2026 00:48:58 +0000 (0:00:00.995) 0:00:13.303 ********** 2026-03-09 00:51:25.211804 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:51:25.211809 | orchestrator | 2026-03-09 00:51:25.211814 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-09 00:51:25.211819 | orchestrator | Monday 09 March 2026 00:48:59 +0000 (0:00:01.460) 0:00:14.764 ********** 2026-03-09 00:51:25.211824 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:25.211829 | orchestrator | 2026-03-09 00:51:25.211834 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-09 00:51:25.211840 | orchestrator | Monday 09 March 2026 00:48:59 +0000 (0:00:00.429) 0:00:15.194 ********** 2026-03-09 00:51:25.211845 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:25.211850 | orchestrator | 2026-03-09 00:51:25.211855 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-09 00:51:25.211860 | orchestrator | Monday 09 March 2026 00:49:00 +0000 (0:00:00.413) 0:00:15.608 ********** 2026-03-09 00:51:25.211870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:51:25.211882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:51:25.211892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:51:25.211897 | orchestrator | 2026-03-09 00:51:25.211903 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-09 00:51:25.211908 | orchestrator | Monday 09 March 2026 00:49:01 +0000 (0:00:01.493) 0:00:17.101 ********** 2026-03-09 00:51:25.211918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:51:25.211924 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:51:25.211933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': 
'30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:51:25.211939 | orchestrator | 2026-03-09 00:51:25.211944 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-09 00:51:25.211951 | orchestrator | Monday 09 March 2026 00:49:04 +0000 (0:00:02.541) 0:00:19.642 ********** 2026-03-09 00:51:25.211957 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-09 00:51:25.211962 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-09 00:51:25.211968 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-09 00:51:25.211973 | orchestrator | 2026-03-09 00:51:25.211978 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-09 00:51:25.211983 | orchestrator | Monday 09 March 2026 00:49:07 +0000 (0:00:02.901) 0:00:22.544 ********** 2026-03-09 00:51:25.211989 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-09 00:51:25.211994 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-09 00:51:25.211999 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-09 00:51:25.212004 | orchestrator | 2026-03-09 00:51:25.212009 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-09 00:51:25.212018 | orchestrator | Monday 09 March 2026 00:49:10 +0000 (0:00:02.832) 0:00:25.377 ********** 2026-03-09 00:51:25.212024 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-09 00:51:25.212032 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-09 00:51:25.212039 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-09 00:51:25.212047 | orchestrator | 2026-03-09 00:51:25.212055 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-09 00:51:25.212064 | orchestrator | Monday 09 March 2026 00:49:11 +0000 (0:00:01.827) 0:00:27.205 ********** 2026-03-09 00:51:25.212077 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-09 00:51:25.212086 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-09 00:51:25.212091 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-09 00:51:25.212096 | orchestrator | 2026-03-09 00:51:25.212102 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-09 00:51:25.212107 | orchestrator | Monday 09 March 2026 00:49:14 +0000 (0:00:02.598) 0:00:29.804 ********** 2026-03-09 00:51:25.212112 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-09 00:51:25.212117 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-09 00:51:25.212122 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-09 00:51:25.212127 | orchestrator | 2026-03-09 00:51:25.212132 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-09 00:51:25.212137 | orchestrator | Monday 09 March 2026 00:49:17 +0000 (0:00:02.666) 0:00:32.470 ********** 2026-03-09 00:51:25.212142 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-09 
00:51:25.212147 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-09 00:51:25.212152 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-09 00:51:25.212157 | orchestrator | 2026-03-09 00:51:25.212162 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-09 00:51:25.212167 | orchestrator | Monday 09 March 2026 00:49:18 +0000 (0:00:01.639) 0:00:34.109 ********** 2026-03-09 00:51:25.212173 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:25.212178 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:51:25.212183 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:51:25.212188 | orchestrator | 2026-03-09 00:51:25.212193 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-09 00:51:25.212198 | orchestrator | Monday 09 March 2026 00:49:20 +0000 (0:00:01.149) 0:00:35.258 ********** 2026-03-09 00:51:25.212206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:51:25.212216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:51:25.212229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:51:25.212234 | orchestrator | 2026-03-09 00:51:25.212239 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-09 00:51:25.212245 | orchestrator | Monday 09 March 2026 00:49:22 +0000 (0:00:02.179) 0:00:37.438 ********** 2026-03-09 00:51:25.212250 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:25.212272 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:25.212277 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:25.212282 | orchestrator | 2026-03-09 00:51:25.212288 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-09 00:51:25.212293 | orchestrator | Monday 09 March 2026 00:49:23 +0000 (0:00:01.523) 0:00:38.962 ********** 2026-03-09 00:51:25.212298 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:25.212303 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:25.212308 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:25.212313 | orchestrator | 2026-03-09 00:51:25.212319 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-09 00:51:25.212324 | orchestrator | Monday 09 March 2026 00:49:35 +0000 (0:00:12.000) 0:00:50.963 ********** 2026-03-09 00:51:25.212329 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:25.212334 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:25.212339 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:25.212344 | orchestrator | 2026-03-09 00:51:25.212350 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 
2026-03-09 00:51:25.212355 | orchestrator | 2026-03-09 00:51:25.212360 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-09 00:51:25.212365 | orchestrator | Monday 09 March 2026 00:49:36 +0000 (0:00:00.415) 0:00:51.379 ********** 2026-03-09 00:51:25.212370 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:51:25.212375 | orchestrator | 2026-03-09 00:51:25.212381 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-09 00:51:25.212386 | orchestrator | Monday 09 March 2026 00:49:36 +0000 (0:00:00.632) 0:00:52.011 ********** 2026-03-09 00:51:25.212391 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:51:25.212396 | orchestrator | 2026-03-09 00:51:25.212401 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-09 00:51:25.212406 | orchestrator | Monday 09 March 2026 00:49:37 +0000 (0:00:00.268) 0:00:52.279 ********** 2026-03-09 00:51:25.212414 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:25.212423 | orchestrator | 2026-03-09 00:51:25.212431 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-09 00:51:25.212445 | orchestrator | Monday 09 March 2026 00:49:39 +0000 (0:00:01.945) 0:00:54.225 ********** 2026-03-09 00:51:25.212453 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:51:25.212462 | orchestrator | 2026-03-09 00:51:25.212471 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-09 00:51:25.212480 | orchestrator | 2026-03-09 00:51:25.212488 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-09 00:51:25.212502 | orchestrator | Monday 09 March 2026 00:50:37 +0000 (0:00:58.609) 0:01:52.834 ********** 2026-03-09 00:51:25.212508 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:51:25.212513 | orchestrator | 2026-03-09 
00:51:25.212518 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-09 00:51:25.212523 | orchestrator | Monday 09 March 2026 00:50:38 +0000 (0:00:00.692) 0:01:53.527 ********** 2026-03-09 00:51:25.212528 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:51:25.212533 | orchestrator | 2026-03-09 00:51:25.212538 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-09 00:51:25.212544 | orchestrator | Monday 09 March 2026 00:50:38 +0000 (0:00:00.226) 0:01:53.754 ********** 2026-03-09 00:51:25.212549 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:25.212554 | orchestrator | 2026-03-09 00:51:25.212559 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-09 00:51:25.212564 | orchestrator | Monday 09 March 2026 00:50:41 +0000 (0:00:03.035) 0:01:56.789 ********** 2026-03-09 00:51:25.212569 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:51:25.212574 | orchestrator | 2026-03-09 00:51:25.212579 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-09 00:51:25.212584 | orchestrator | 2026-03-09 00:51:25.212590 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-09 00:51:25.212595 | orchestrator | Monday 09 March 2026 00:50:59 +0000 (0:00:17.698) 0:02:14.487 ********** 2026-03-09 00:51:25.212600 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:51:25.212605 | orchestrator | 2026-03-09 00:51:25.212615 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-09 00:51:25.212624 | orchestrator | Monday 09 March 2026 00:51:00 +0000 (0:00:00.765) 0:02:15.253 ********** 2026-03-09 00:51:25.212632 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:51:25.212641 | orchestrator | 2026-03-09 00:51:25.212649 | orchestrator | TASK [rabbitmq : 
Restart rabbitmq container] *********************************** 2026-03-09 00:51:25.212658 | orchestrator | Monday 09 March 2026 00:51:00 +0000 (0:00:00.498) 0:02:15.751 ********** 2026-03-09 00:51:25.212667 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:25.212675 | orchestrator | 2026-03-09 00:51:25.212684 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-09 00:51:25.212692 | orchestrator | Monday 09 March 2026 00:51:02 +0000 (0:00:01.837) 0:02:17.589 ********** 2026-03-09 00:51:25.212697 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:51:25.212704 | orchestrator | 2026-03-09 00:51:25.212712 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-09 00:51:25.212720 | orchestrator | 2026-03-09 00:51:25.212729 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-09 00:51:25.212737 | orchestrator | Monday 09 March 2026 00:51:18 +0000 (0:00:16.006) 0:02:33.595 ********** 2026-03-09 00:51:25.212746 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:51:25.212751 | orchestrator | 2026-03-09 00:51:25.212756 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-09 00:51:25.212762 | orchestrator | Monday 09 March 2026 00:51:19 +0000 (0:00:00.809) 0:02:34.405 ********** 2026-03-09 00:51:25.212767 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:51:25.212772 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:51:25.212777 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:51:25.212782 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-09 00:51:25.212787 | orchestrator | enable_outward_rabbitmq_True 2026-03-09 00:51:25.212800 | orchestrator | 2026-03-09 00:51:25.212805 | orchestrator | PLAY [Apply role rabbitmq (outward)] 
******************************************* 2026-03-09 00:51:25.212810 | orchestrator | skipping: no hosts matched 2026-03-09 00:51:25.212815 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-09 00:51:25.212820 | orchestrator | outward_rabbitmq_restart 2026-03-09 00:51:25.212825 | orchestrator | 2026-03-09 00:51:25.212830 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-09 00:51:25.212836 | orchestrator | skipping: no hosts matched 2026-03-09 00:51:25.212841 | orchestrator | 2026-03-09 00:51:25.212846 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-09 00:51:25.212851 | orchestrator | skipping: no hosts matched 2026-03-09 00:51:25.212856 | orchestrator | 2026-03-09 00:51:25.212861 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:51:25.212866 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-09 00:51:25.212872 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-09 00:51:25.212877 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:51:25.212882 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 00:51:25.212888 | orchestrator | 2026-03-09 00:51:25.212893 | orchestrator | 2026-03-09 00:51:25.212898 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:51:25.212903 | orchestrator | Monday 09 March 2026 00:51:22 +0000 (0:00:03.387) 0:02:37.792 ********** 2026-03-09 00:51:25.212908 | orchestrator | =============================================================================== 2026-03-09 00:51:25.212913 | orchestrator | rabbitmq : Waiting for rabbitmq to 
start ------------------------------- 92.31s 2026-03-09 00:51:25.212918 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------ 12.00s 2026-03-09 00:51:25.212923 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.82s 2026-03-09 00:51:25.212928 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.39s 2026-03-09 00:51:25.212933 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 3.18s 2026-03-09 00:51:25.212941 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.02s 2026-03-09 00:51:25.212947 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.90s 2026-03-09 00:51:25.212952 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.83s 2026-03-09 00:51:25.212957 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.67s 2026-03-09 00:51:25.212962 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.60s 2026-03-09 00:51:25.212967 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.54s 2026-03-09 00:51:25.212972 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.18s 2026-03-09 00:51:25.212977 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.09s 2026-03-09 00:51:25.212982 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.83s 2026-03-09 00:51:25.212987 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.64s 2026-03-09 00:51:25.212992 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.61s 2026-03-09 00:51:25.212997 | orchestrator | rabbitmq : Creating rabbitmq volume 
------------------------------------- 1.52s 2026-03-09 00:51:25.213006 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.49s 2026-03-09 00:51:25.213011 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.46s 2026-03-09 00:51:25.213020 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.15s 2026-03-09 00:51:28.286716 | orchestrator | 2026-03-09 00:51:28 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:51:28.287885 | orchestrator | 2026-03-09 00:51:28 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:51:28.288101 | orchestrator | 2026-03-09 00:51:28 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state STARTED 2026-03-09 00:51:28.288210 | orchestrator | 2026-03-09 00:51:28 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:52:07.957793 | orchestrator | 2026-03-09 00:52:07 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:52:07.958307 | orchestrator | 2026-03-09 00:52:07 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:52:07.960572 | orchestrator | 2026-03-09 00:52:07 | INFO  | Task 1b232584-6aad-42d3-be40-de44de7a537d is in state SUCCESS 2026-03-09 00:52:07.961358 | orchestrator | 2026-03-09 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:52:07.963341 | orchestrator | 2026-03-09 00:52:07.963367 | orchestrator | 2026-03-09 00:52:07.963379 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 00:52:07.963390 | orchestrator | 2026-03-09 
00:52:07.963401 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:52:07.963412 | orchestrator | Monday 09 March 2026 00:49:37 +0000 (0:00:00.203) 0:00:00.203 ********** 2026-03-09 00:52:07.963423 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:52:07.963434 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:52:07.963444 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:52:07.963454 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.963465 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.963475 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.963485 | orchestrator | 2026-03-09 00:52:07.963496 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:52:07.963507 | orchestrator | Monday 09 March 2026 00:49:38 +0000 (0:00:00.843) 0:00:01.046 ********** 2026-03-09 00:52:07.963529 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-09 00:52:07.963540 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-09 00:52:07.963551 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-09 00:52:07.963561 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-09 00:52:07.963572 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-09 00:52:07.963582 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-09 00:52:07.963626 | orchestrator | 2026-03-09 00:52:07.963637 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-09 00:52:07.963648 | orchestrator | 2026-03-09 00:52:07.963686 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-09 00:52:07.963697 | orchestrator | Monday 09 March 2026 00:49:39 +0000 (0:00:01.044) 0:00:02.091 ********** 2026-03-09 00:52:07.963709 | orchestrator | included: 
/ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:52:07.963721 | orchestrator | 2026-03-09 00:52:07.963781 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-09 00:52:07.963793 | orchestrator | Monday 09 March 2026 00:49:41 +0000 (0:00:02.077) 0:00:04.168 ********** 2026-03-09 00:52:07.963807 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.963821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.963832 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.963844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.963866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.963877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.963887 | orchestrator | 2026-03-09 00:52:07.963908 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-09 00:52:07.963919 | orchestrator | Monday 09 March 2026 00:49:42 +0000 (0:00:01.585) 0:00:05.753 ********** 2026-03-09 00:52:07.963930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.963941 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.963952 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.963964 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.963976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964072 | orchestrator | 2026-03-09 00:52:07.964084 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-09 00:52:07.964104 | orchestrator | Monday 09 March 2026 00:49:44 +0000 (0:00:01.593) 0:00:07.347 ********** 2026-03-09 00:52:07.964117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964129 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964162 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964206 | orchestrator | 2026-03-09 00:52:07.964236 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-09 00:52:07.964247 | orchestrator | Monday 09 March 2026 00:49:45 +0000 (0:00:01.534) 0:00:08.881 ********** 2026-03-09 00:52:07.964257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964268 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964287 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964320 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964330 | orchestrator | 2026-03-09 00:52:07.964346 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-09 00:52:07.964357 | orchestrator | Monday 09 March 2026 00:49:47 +0000 (0:00:01.684) 0:00:10.566 ********** 2026-03-09 00:52:07.964368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964393 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.964443 | orchestrator | 2026-03-09 00:52:07.964454 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-09 00:52:07.964464 | orchestrator | Monday 09 March 2026 00:49:49 +0000 (0:00:01.762) 0:00:12.329 ********** 2026-03-09 00:52:07.964475 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:52:07.964487 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:52:07.964497 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:52:07.964507 | 
orchestrator | changed: [testbed-node-2] 2026-03-09 00:52:07.964518 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:52:07.964528 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:52:07.964539 | orchestrator | 2026-03-09 00:52:07.964549 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-09 00:52:07.964559 | orchestrator | Monday 09 March 2026 00:49:52 +0000 (0:00:02.726) 0:00:15.055 ********** 2026-03-09 00:52:07.964570 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-09 00:52:07.964581 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-09 00:52:07.964591 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-09 00:52:07.964602 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-09 00:52:07.964612 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-09 00:52:07.964623 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-09 00:52:07.964633 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:52:07.964644 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:52:07.964660 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:52:07.964670 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:52:07.964680 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-09 00:52:07.964692 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 
'geneve'}) 2026-03-09 00:52:07.964703 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-09 00:52:07.964715 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-09 00:52:07.964730 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-09 00:52:07.964741 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-09 00:52:07.964752 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-09 00:52:07.964763 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-09 00:52:07.964779 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:52:07.964791 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:52:07.964802 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:52:07.964812 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:52:07.964823 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:52:07.964833 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-09 00:52:07.964844 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:52:07.964855 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:52:07.964865 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:52:07.964876 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:52:07.964887 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:52:07.964897 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-09 00:52:07.964908 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:52:07.964919 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:52:07.964929 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:52:07.964940 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:52:07.964950 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:52:07.964961 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-09 00:52:07.964972 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-09 00:52:07.964982 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-09 00:52:07.964993 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-09 00:52:07.965004 
| orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-09 00:52:07.965014 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-09 00:52:07.965024 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-09 00:52:07.965034 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-09 00:52:07.965045 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-09 00:52:07.965060 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-09 00:52:07.965071 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-09 00:52:07.965081 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-09 00:52:07.965098 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-09 00:52:07.965108 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-09 00:52:07.965123 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-09 00:52:07.965134 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-09 
00:52:07.965144 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-09 00:52:07.965154 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-09 00:52:07.965165 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-09 00:52:07.965175 | orchestrator | 2026-03-09 00:52:07.965186 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:52:07.965197 | orchestrator | Monday 09 March 2026 00:50:13 +0000 (0:00:21.413) 0:00:36.469 ********** 2026-03-09 00:52:07.965207 | orchestrator | 2026-03-09 00:52:07.965231 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:52:07.965242 | orchestrator | Monday 09 March 2026 00:50:13 +0000 (0:00:00.178) 0:00:36.647 ********** 2026-03-09 00:52:07.965252 | orchestrator | 2026-03-09 00:52:07.965262 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:52:07.965273 | orchestrator | Monday 09 March 2026 00:50:13 +0000 (0:00:00.183) 0:00:36.831 ********** 2026-03-09 00:52:07.965283 | orchestrator | 2026-03-09 00:52:07.965293 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:52:07.965304 | orchestrator | Monday 09 March 2026 00:50:14 +0000 (0:00:00.121) 0:00:36.952 ********** 2026-03-09 00:52:07.965314 | orchestrator | 2026-03-09 00:52:07.965325 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-09 00:52:07.965335 | orchestrator | Monday 09 March 2026 00:50:14 +0000 (0:00:00.148) 0:00:37.100 ********** 2026-03-09 00:52:07.965345 | orchestrator | 2026-03-09 00:52:07.965356 | orchestrator | TASK [ovn-controller : Flush 
handlers] ***************************************** 2026-03-09 00:52:07.965366 | orchestrator | Monday 09 March 2026 00:50:14 +0000 (0:00:00.142) 0:00:37.243 ********** 2026-03-09 00:52:07.965376 | orchestrator | 2026-03-09 00:52:07.965387 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-09 00:52:07.965397 | orchestrator | Monday 09 March 2026 00:50:14 +0000 (0:00:00.128) 0:00:37.371 ********** 2026-03-09 00:52:07.965407 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:52:07.965418 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.965428 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:52:07.965438 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:52:07.965448 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.965459 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.965469 | orchestrator | 2026-03-09 00:52:07.965480 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-09 00:52:07.965490 | orchestrator | Monday 09 March 2026 00:50:16 +0000 (0:00:02.463) 0:00:39.835 ********** 2026-03-09 00:52:07.965501 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:52:07.965511 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:52:07.965520 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:52:07.965529 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:52:07.965538 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:52:07.965547 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:52:07.965556 | orchestrator | 2026-03-09 00:52:07.965565 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-09 00:52:07.965574 | orchestrator | 2026-03-09 00:52:07.965584 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-09 00:52:07.965598 | orchestrator | Monday 09 March 2026 00:50:50 +0000 (0:00:33.682) 0:01:13.518 
********** 2026-03-09 00:52:07.965608 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:52:07.965617 | orchestrator | 2026-03-09 00:52:07.965626 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-09 00:52:07.965635 | orchestrator | Monday 09 March 2026 00:50:52 +0000 (0:00:01.457) 0:01:14.975 ********** 2026-03-09 00:52:07.965645 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:52:07.965654 | orchestrator | 2026-03-09 00:52:07.965663 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-09 00:52:07.965672 | orchestrator | Monday 09 March 2026 00:50:53 +0000 (0:00:01.721) 0:01:16.696 ********** 2026-03-09 00:52:07.965680 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.965689 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.965698 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.965707 | orchestrator | 2026-03-09 00:52:07.965716 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-09 00:52:07.965725 | orchestrator | Monday 09 March 2026 00:50:55 +0000 (0:00:02.139) 0:01:18.835 ********** 2026-03-09 00:52:07.965734 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.965743 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.965752 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.965766 | orchestrator | 2026-03-09 00:52:07.965776 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-09 00:52:07.965785 | orchestrator | Monday 09 March 2026 00:50:56 +0000 (0:00:01.025) 0:01:19.861 ********** 2026-03-09 00:52:07.965794 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.965803 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.965812 | 
orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.965821 | orchestrator | 2026-03-09 00:52:07.965830 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-09 00:52:07.965839 | orchestrator | Monday 09 March 2026 00:50:57 +0000 (0:00:00.404) 0:01:20.266 ********** 2026-03-09 00:52:07.965848 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.965857 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.965866 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.965875 | orchestrator | 2026-03-09 00:52:07.965884 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-09 00:52:07.965897 | orchestrator | Monday 09 March 2026 00:50:57 +0000 (0:00:00.504) 0:01:20.771 ********** 2026-03-09 00:52:07.965906 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.965915 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.965924 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.965933 | orchestrator | 2026-03-09 00:52:07.965941 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-09 00:52:07.965950 | orchestrator | Monday 09 March 2026 00:50:58 +0000 (0:00:00.811) 0:01:21.582 ********** 2026-03-09 00:52:07.965959 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.965969 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.965978 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.965987 | orchestrator | 2026-03-09 00:52:07.965996 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-09 00:52:07.966005 | orchestrator | Monday 09 March 2026 00:50:59 +0000 (0:00:00.729) 0:01:22.312 ********** 2026-03-09 00:52:07.966274 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.966298 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.966308 | orchestrator | skipping: [testbed-node-2] 
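The "Configure OVN in OVSDB" task earlier in this log writes per-chassis `external_ids` into the local Open vSwitch database (via `ovs-vsctl`). A minimal sketch of how those key/value pairs fit together, using the exact values visible in the log above; the helper function name is illustrative, not part of kolla-ansible:

```python
# Assemble the external_ids the "Configure OVN in OVSDB" task sets per
# chassis. Values are taken from this log run; ovn_external_ids() is an
# illustrative helper, not a kolla-ansible API.
def ovn_external_ids(node_ip, sb_db_ips, sb_port=6642):
    return {
        "ovn-encap-ip": node_ip,            # this chassis' tunnel endpoint
        "ovn-encap-type": "geneve",         # overlay encapsulation
        # ovn-remote lists every OVN SB DB endpoint, comma-separated:
        "ovn-remote": ",".join(f"tcp:{ip}:{sb_port}" for ip in sb_db_ips),
        "ovn-remote-probe-interval": "60000",    # milliseconds
        "ovn-openflow-probe-interval": "60",     # seconds
        "ovn-monitor-all": False,
    }

ids = ovn_external_ids("192.168.16.10",
                       ["192.168.16.10", "192.168.16.11", "192.168.16.12"])
print(ids["ovn-remote"])
# → tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

Note how the same three control-plane IPs appear in `ovn-remote` on all six nodes, while `ovn-encap-ip` differs per node, matching the per-item output above.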
2026-03-09 00:52:07.966317 | orchestrator | 2026-03-09 00:52:07.966327 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-09 00:52:07.966335 | orchestrator | Monday 09 March 2026 00:50:59 +0000 (0:00:00.493) 0:01:22.806 ********** 2026-03-09 00:52:07.966344 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.966361 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.966370 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.966379 | orchestrator | 2026-03-09 00:52:07.966388 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-09 00:52:07.966398 | orchestrator | Monday 09 March 2026 00:51:00 +0000 (0:00:00.499) 0:01:23.306 ********** 2026-03-09 00:52:07.966407 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.966416 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.966424 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.966433 | orchestrator | 2026-03-09 00:52:07.966442 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-09 00:52:07.966452 | orchestrator | Monday 09 March 2026 00:51:01 +0000 (0:00:00.690) 0:01:23.996 ********** 2026-03-09 00:52:07.966461 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.966469 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.966479 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.966488 | orchestrator | 2026-03-09 00:52:07.966497 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-09 00:52:07.966506 | orchestrator | Monday 09 March 2026 00:51:01 +0000 (0:00:00.367) 0:01:24.363 ********** 2026-03-09 00:52:07.966515 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.966524 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.966533 | orchestrator | skipping: [testbed-node-2] 
2026-03-09 00:52:07.966542 | orchestrator | 2026-03-09 00:52:07.966550 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-09 00:52:07.966560 | orchestrator | Monday 09 March 2026 00:51:01 +0000 (0:00:00.324) 0:01:24.688 ********** 2026-03-09 00:52:07.966569 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.966578 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.966587 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.966596 | orchestrator | 2026-03-09 00:52:07.966605 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-09 00:52:07.966614 | orchestrator | Monday 09 March 2026 00:51:02 +0000 (0:00:00.331) 0:01:25.020 ********** 2026-03-09 00:52:07.966622 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.966630 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.966639 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.966648 | orchestrator | 2026-03-09 00:52:07.966657 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-09 00:52:07.966665 | orchestrator | Monday 09 March 2026 00:51:02 +0000 (0:00:00.679) 0:01:25.699 ********** 2026-03-09 00:52:07.966674 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.966683 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.966692 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.966701 | orchestrator | 2026-03-09 00:52:07.966710 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-09 00:52:07.966719 | orchestrator | Monday 09 March 2026 00:51:03 +0000 (0:00:00.362) 0:01:26.061 ********** 2026-03-09 00:52:07.966728 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.966737 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.966745 | orchestrator | skipping: [testbed-node-2] 
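The "Check OVN NB/SB service port liveness" tasks above are skipped here because no existing cluster was found. Conceptually they are plain TCP connect probes (kolla-ansible uses Ansible's `wait_for` module for this; the standalone helper below is an illustrative sketch, not the role's actual code):

```python
import socket

# Sketch of a TCP port-liveness probe like the "Check OVN NB/SB service
# port liveness" tasks perform. OVN defaults: NB DB listens on 6641,
# SB DB on 6642 (the port seen in ovn-remote above).
def port_alive(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

On a fresh deployment all three DB hosts fail the probe, which routes them into the "no volume, port down" group and triggers the bootstrap path seen further down.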
2026-03-09 00:52:07.966753 | orchestrator | 2026-03-09 00:52:07.966761 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-09 00:52:07.966769 | orchestrator | Monday 09 March 2026 00:51:03 +0000 (0:00:00.313) 0:01:26.375 ********** 2026-03-09 00:52:07.966777 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.966786 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.966793 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.966801 | orchestrator | 2026-03-09 00:52:07.966808 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-09 00:52:07.966816 | orchestrator | Monday 09 March 2026 00:51:03 +0000 (0:00:00.343) 0:01:26.718 ********** 2026-03-09 00:52:07.966825 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.966843 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.966860 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.966870 | orchestrator | 2026-03-09 00:52:07.966880 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-09 00:52:07.966889 | orchestrator | Monday 09 March 2026 00:51:04 +0000 (0:00:00.299) 0:01:27.018 ********** 2026-03-09 00:52:07.966899 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:52:07.966908 | orchestrator | 2026-03-09 00:52:07.966918 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-09 00:52:07.966927 | orchestrator | Monday 09 March 2026 00:51:04 +0000 (0:00:00.889) 0:01:27.907 ********** 2026-03-09 00:52:07.966937 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.966947 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.966957 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.966966 | orchestrator | 2026-03-09 00:52:07.966985 | 
orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-09 00:52:07.966992 | orchestrator | Monday 09 March 2026 00:51:05 +0000 (0:00:00.476) 0:01:28.383 ********** 2026-03-09 00:52:07.966997 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.967003 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.967008 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.967013 | orchestrator | 2026-03-09 00:52:07.967019 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-09 00:52:07.967024 | orchestrator | Monday 09 March 2026 00:51:05 +0000 (0:00:00.519) 0:01:28.904 ********** 2026-03-09 00:52:07.967030 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.967035 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.967040 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.967046 | orchestrator | 2026-03-09 00:52:07.967051 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-09 00:52:07.967057 | orchestrator | Monday 09 March 2026 00:51:06 +0000 (0:00:00.643) 0:01:29.548 ********** 2026-03-09 00:52:07.967062 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.967067 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.967073 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.967078 | orchestrator | 2026-03-09 00:52:07.967084 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-09 00:52:07.967089 | orchestrator | Monday 09 March 2026 00:51:06 +0000 (0:00:00.353) 0:01:29.901 ********** 2026-03-09 00:52:07.967095 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.967100 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.967105 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.967111 | orchestrator | 2026-03-09 00:52:07.967116 | orchestrator 
| TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-09 00:52:07.967122 | orchestrator | Monday 09 March 2026 00:51:07 +0000 (0:00:00.353) 0:01:30.254 ********** 2026-03-09 00:52:07.967127 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.967133 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.967138 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.967143 | orchestrator | 2026-03-09 00:52:07.967149 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-09 00:52:07.967154 | orchestrator | Monday 09 March 2026 00:51:07 +0000 (0:00:00.372) 0:01:30.627 ********** 2026-03-09 00:52:07.967160 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.967165 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.967170 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.967180 | orchestrator | 2026-03-09 00:52:07.967189 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-09 00:52:07.967198 | orchestrator | Monday 09 March 2026 00:51:08 +0000 (0:00:00.879) 0:01:31.506 ********** 2026-03-09 00:52:07.967207 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.967231 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.967241 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.967257 | orchestrator | 2026-03-09 00:52:07.967267 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-09 00:52:07.967276 | orchestrator | Monday 09 March 2026 00:51:09 +0000 (0:00:00.824) 0:01:32.330 ********** 2026-03-09 00:52:07.967287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967359 | orchestrator | 2026-03-09 00:52:07.967368 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-09 00:52:07.967374 | orchestrator | Monday 09 March 2026 00:51:11 +0000 (0:00:01.936) 0:01:34.267 ********** 2026-03-09 00:52:07.967380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967411 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967437 | orchestrator | 2026-03-09 00:52:07.967442 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-09 00:52:07.967451 | orchestrator | Monday 09 March 2026 00:51:15 +0000 (0:00:04.181) 0:01:38.449 ********** 
2026-03-09 00:52:07.967457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.967547 | orchestrator | 2026-03-09 00:52:07.967555 | orchestrator | TASK [ovn-db : Flush handlers] 
*************************************************
2026-03-09 00:52:07.967571 | orchestrator | Monday 09 March 2026 00:51:18 +0000 (0:00:02.538) 0:01:40.987 **********
2026-03-09 00:52:07.967581 | orchestrator |
2026-03-09 00:52:07.967591 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-09 00:52:07.967600 | orchestrator | Monday 09 March 2026 00:51:18 +0000 (0:00:00.129) 0:01:41.117 **********
2026-03-09 00:52:07.967608 | orchestrator |
2026-03-09 00:52:07.967614 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-09 00:52:07.967620 | orchestrator | Monday 09 March 2026 00:51:18 +0000 (0:00:00.078) 0:01:41.195 **********
2026-03-09 00:52:07.967629 | orchestrator |
2026-03-09 00:52:07.967638 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-09 00:52:07.967648 | orchestrator | Monday 09 March 2026 00:51:18 +0000 (0:00:00.082) 0:01:41.278 **********
2026-03-09 00:52:07.967656 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:07.967665 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:07.967673 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:07.967681 | orchestrator |
2026-03-09 00:52:07.967691 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-09 00:52:07.967700 | orchestrator | Monday 09 March 2026 00:51:21 +0000 (0:00:02.859) 0:01:44.138 **********
2026-03-09 00:52:07.967709 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:07.967718 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:07.967728 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:07.967737 | orchestrator |
2026-03-09 00:52:07.967746 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-09 00:52:07.967753 | orchestrator | Monday 09 March 2026 00:51:24 +0000 (0:00:03.179) 0:01:47.317 **********
2026-03-09 00:52:07.967761 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:07.967770 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:52:07.967779 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:52:07.967788 | orchestrator |
2026-03-09 00:52:07.967797 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-09 00:52:07.967806 | orchestrator | Monday 09 March 2026 00:51:28 +0000 (0:00:04.026) 0:01:51.343 **********
2026-03-09 00:52:07.967815 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:52:07.967824 | orchestrator |
2026-03-09 00:52:07.967833 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-09 00:52:07.967843 | orchestrator | Monday 09 March 2026 00:51:28 +0000 (0:00:00.110) 0:01:51.453 **********
2026-03-09 00:52:07.967852 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:07.967859 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:07.967869 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:07.967878 | orchestrator |
2026-03-09 00:52:07.967887 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-09 00:52:07.967896 | orchestrator | Monday 09 March 2026 00:51:29 +0000 (0:00:00.768) 0:01:52.222 **********
2026-03-09 00:52:07.967905 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:07.967914 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:07.967923 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:07.967933 | orchestrator |
2026-03-09 00:52:07.967942 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-09 00:52:07.967951 | orchestrator | Monday 09 March 2026 00:51:29 +0000 (0:00:00.676) 0:01:52.899 **********
2026-03-09 00:52:07.967960 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:07.967969 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:07.967979 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:07.967988 | orchestrator |
2026-03-09 00:52:07.967997 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-09 00:52:07.968006 | orchestrator | Monday 09 March 2026 00:51:30 +0000 (0:00:00.721) 0:01:53.620 **********
2026-03-09 00:52:07.968015 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:52:07.968024 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:52:07.968034 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:52:07.968043 | orchestrator |
2026-03-09 00:52:07.968052 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-09 00:52:07.968068 | orchestrator | Monday 09 March 2026 00:51:31 +0000 (0:00:00.701) 0:01:54.321 **********
2026-03-09 00:52:07.968077 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:07.968086 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:07.968100 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:07.968110 | orchestrator |
2026-03-09 00:52:07.968119 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-09 00:52:07.968128 | orchestrator | Monday 09 March 2026 00:51:32 +0000 (0:00:00.773) 0:01:55.094 **********
2026-03-09 00:52:07.968137 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:07.968146 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:07.968155 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:07.968164 | orchestrator |
2026-03-09 00:52:07.968173 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-09 00:52:07.968181 | orchestrator | Monday 09 March 2026 00:51:32 +0000 (0:00:00.306) 0:01:55.904 **********
2026-03-09 00:52:07.968189 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:52:07.968198 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:52:07.968206 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:52:07.968232 | orchestrator | 2026-03-09 00:52:07.968244 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-09 00:52:07.968252 | orchestrator | Monday 09 March 2026 00:51:33 +0000 (0:00:00.306) 0:01:56.210 ********** 2026-03-09 00:52:07.968261 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968269 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968278 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968287 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968296 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968305 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968313 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968326 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968339 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968347 | orchestrator | 2026-03-09 00:52:07.968355 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-09 00:52:07.968363 | orchestrator | Monday 09 March 2026 00:51:34 +0000 (0:00:01.554) 0:01:57.764 ********** 2026-03-09 00:52:07.968371 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968383 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968392 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968401 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 
00:52:07.968406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968416 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968434 | orchestrator | 2026-03-09 00:52:07.968439 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-09 00:52:07.968444 | orchestrator | Monday 09 March 2026 00:51:38 +0000 (0:00:04.119) 0:02:01.884 ********** 2026-03-09 00:52:07.968452 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968458 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968465 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968480 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968499 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 
'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 00:52:07.968504 | orchestrator | 2026-03-09 00:52:07.968509 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-09 00:52:07.968514 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:03.348) 0:02:05.233 ********** 2026-03-09 00:52:07.968518 | orchestrator | 2026-03-09 00:52:07.968523 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-09 00:52:07.968528 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:00.106) 0:02:05.339 ********** 2026-03-09 00:52:07.968533 | orchestrator | 2026-03-09 00:52:07.968538 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-09 00:52:07.968542 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:00.094) 0:02:05.434 ********** 2026-03-09 00:52:07.968547 | orchestrator | 2026-03-09 00:52:07.968552 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-09 00:52:07.968557 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:00.083) 0:02:05.517 ********** 2026-03-09 00:52:07.968562 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:52:07.968567 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:52:07.968572 | orchestrator | 2026-03-09 00:52:07.968579 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-09 00:52:07.968584 | orchestrator | Monday 09 March 2026 00:51:49 +0000 (0:00:06.420) 0:02:11.937 ********** 2026-03-09 00:52:07.968589 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:52:07.968594 | orchestrator | changed: [testbed-node-2] 
2026-03-09 00:52:07.968599 | orchestrator | 2026-03-09 00:52:07.968604 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-09 00:52:07.968609 | orchestrator | Monday 09 March 2026 00:51:55 +0000 (0:00:06.646) 0:02:18.584 ********** 2026-03-09 00:52:07.968613 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:52:07.968618 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:52:07.968623 | orchestrator | 2026-03-09 00:52:07.968628 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-09 00:52:07.968633 | orchestrator | Monday 09 March 2026 00:52:02 +0000 (0:00:06.873) 0:02:25.457 ********** 2026-03-09 00:52:07.968643 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:52:07.968648 | orchestrator | 2026-03-09 00:52:07.968652 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-09 00:52:07.968657 | orchestrator | Monday 09 March 2026 00:52:02 +0000 (0:00:00.151) 0:02:25.609 ********** 2026-03-09 00:52:07.968662 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.968667 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.968672 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.968677 | orchestrator | 2026-03-09 00:52:07.968681 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-09 00:52:07.968686 | orchestrator | Monday 09 March 2026 00:52:03 +0000 (0:00:00.832) 0:02:26.441 ********** 2026-03-09 00:52:07.968691 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.968696 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.968701 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:52:07.968706 | orchestrator | 2026-03-09 00:52:07.968714 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-09 00:52:07.968719 | orchestrator | Monday 09 March 2026 
00:52:04 +0000 (0:00:00.628) 0:02:27.069 ********** 2026-03-09 00:52:07.968723 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.968728 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.968733 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.968738 | orchestrator | 2026-03-09 00:52:07.968743 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-09 00:52:07.968748 | orchestrator | Monday 09 March 2026 00:52:04 +0000 (0:00:00.826) 0:02:27.895 ********** 2026-03-09 00:52:07.968752 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:52:07.968757 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:52:07.968762 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:52:07.968767 | orchestrator | 2026-03-09 00:52:07.968772 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-09 00:52:07.968777 | orchestrator | Monday 09 March 2026 00:52:05 +0000 (0:00:00.716) 0:02:28.612 ********** 2026-03-09 00:52:07.968782 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.968786 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.968791 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.968796 | orchestrator | 2026-03-09 00:52:07.968801 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-09 00:52:07.968806 | orchestrator | Monday 09 March 2026 00:52:06 +0000 (0:00:00.868) 0:02:29.480 ********** 2026-03-09 00:52:07.968811 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:52:07.968815 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:52:07.968820 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:52:07.968825 | orchestrator | 2026-03-09 00:52:07.968830 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:52:07.968835 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  
rescued=0 ignored=0 2026-03-09 00:52:07.968840 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-09 00:52:07.968845 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-09 00:52:07.968850 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:52:07.968855 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:52:07.968860 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 00:52:07.968865 | orchestrator | 2026-03-09 00:52:07.968870 | orchestrator | 2026-03-09 00:52:07.968875 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:52:07.968879 | orchestrator | Monday 09 March 2026 00:52:07 +0000 (0:00:00.915) 0:02:30.396 ********** 2026-03-09 00:52:07.968884 | orchestrator | =============================================================================== 2026-03-09 00:52:07.968889 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 33.68s 2026-03-09 00:52:07.968894 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.41s 2026-03-09 00:52:07.968899 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 10.90s 2026-03-09 00:52:07.968903 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.83s 2026-03-09 00:52:07.968908 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.28s 2026-03-09 00:52:07.968913 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.18s 2026-03-09 00:52:07.968918 | orchestrator | ovn-db : Copying over config.json files for services 
-------------------- 4.12s 2026-03-09 00:52:07.968927 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.35s 2026-03-09 00:52:07.968932 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.73s 2026-03-09 00:52:07.968937 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.54s 2026-03-09 00:52:07.968942 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.46s 2026-03-09 00:52:07.968947 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 2.14s 2026-03-09 00:52:07.968952 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.08s 2026-03-09 00:52:07.968956 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.94s 2026-03-09 00:52:07.968961 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.76s 2026-03-09 00:52:07.968968 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.72s 2026-03-09 00:52:07.968973 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.69s 2026-03-09 00:52:07.968978 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.59s 2026-03-09 00:52:07.968983 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.59s 2026-03-09 00:52:07.968988 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.55s 2026-03-09 00:52:10.997746 | orchestrator | 2026-03-09 00:52:10 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:52:10.999748 | orchestrator | 2026-03-09 00:52:10 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state STARTED 2026-03-09 00:52:11.000045 | orchestrator | 2026-03-09 00:52:10 | INFO  | Wait 1 
second(s) until the next check 2026-03-09 00:55:35.408657 | orchestrator | 2026-03-09 00:55:35 | INFO  | Task f645106b-c781-49cf-bd32-f7a407860a63 is in state STARTED 2026-03-09 00:55:35.408793 | orchestrator | 2026-03-09 00:55:35 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:55:35.416327 | orchestrator | 2026-03-09 00:55:35 | INFO  | Task 6240b3c2-4dcd-4257-b765-193c305df9c1 is in state SUCCESS 2026-03-09 00:55:35.417850 | orchestrator | 2026-03-09 00:55:35.417903 | orchestrator | 2026-03-09 00:55:35.417912 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-03-09 00:55:35.417921 | orchestrator | 2026-03-09 00:55:35.417929 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 00:55:35.417937 | orchestrator | Monday 09 March 2026 00:48:23 +0000 (0:00:00.370) 0:00:00.370 ********** 2026-03-09 00:55:35.417945 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.417953 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.417961 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.417968 | orchestrator | 2026-03-09 00:55:35.417976 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 00:55:35.417984 | orchestrator | Monday 09 March 2026 00:48:23 +0000 (0:00:00.467) 0:00:00.837 ********** 2026-03-09 00:55:35.417992 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-09 00:55:35.418205 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-09 00:55:35.418215 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-09 00:55:35.418222 | orchestrator | 2026-03-09 00:55:35.418270 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-09 00:55:35.418280 | orchestrator | 2026-03-09 00:55:35.418287 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-09 00:55:35.418321 | orchestrator | Monday 09 March 2026 00:48:24 +0000 (0:00:00.494) 0:00:01.332 ********** 2026-03-09 00:55:35.418334 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.418347 | orchestrator | 2026-03-09 00:55:35.418359 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-09 00:55:35.418371 | orchestrator | Monday 09 March 2026 00:48:24 +0000 (0:00:00.755) 0:00:02.088 ********** 
2026-03-09 00:55:35.418382 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.418395 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.418514 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.418527 | orchestrator | 2026-03-09 00:55:35.418538 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-09 00:55:35.418550 | orchestrator | Monday 09 March 2026 00:48:25 +0000 (0:00:00.802) 0:00:02.891 ********** 2026-03-09 00:55:35.418559 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.418568 | orchestrator | 2026-03-09 00:55:35.418576 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-09 00:55:35.418603 | orchestrator | Monday 09 March 2026 00:48:26 +0000 (0:00:00.748) 0:00:03.639 ********** 2026-03-09 00:55:35.418611 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.418618 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.418625 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.418633 | orchestrator | 2026-03-09 00:55:35.418640 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-09 00:55:35.418647 | orchestrator | Monday 09 March 2026 00:48:27 +0000 (0:00:00.633) 0:00:04.273 ********** 2026-03-09 00:55:35.418655 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-09 00:55:35.418662 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-09 00:55:35.418669 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-09 00:55:35.418677 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-09 00:55:35.418684 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 
'value': 1}) 2026-03-09 00:55:35.418692 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-09 00:55:35.418700 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-09 00:55:35.418707 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-09 00:55:35.418714 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-09 00:55:35.418721 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-09 00:55:35.418729 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-09 00:55:35.418736 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-09 00:55:35.418743 | orchestrator | 2026-03-09 00:55:35.418764 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-09 00:55:35.418772 | orchestrator | Monday 09 March 2026 00:48:30 +0000 (0:00:03.275) 0:00:07.548 ********** 2026-03-09 00:55:35.418780 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-09 00:55:35.418792 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-09 00:55:35.418802 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-09 00:55:35.418809 | orchestrator | 2026-03-09 00:55:35.418817 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-09 00:55:35.418824 | orchestrator | Monday 09 March 2026 00:48:31 +0000 (0:00:00.822) 0:00:08.371 ********** 2026-03-09 00:55:35.418831 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-09 00:55:35.418839 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-09 00:55:35.418846 | orchestrator | changed: 
[testbed-node-0] => (item=ip_vs) 2026-03-09 00:55:35.418853 | orchestrator | 2026-03-09 00:55:35.418907 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-09 00:55:35.418922 | orchestrator | Monday 09 March 2026 00:48:32 +0000 (0:00:01.456) 0:00:09.827 ********** 2026-03-09 00:55:35.418934 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-09 00:55:35.418946 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.418974 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-09 00:55:35.418987 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.419020 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-09 00:55:35.419125 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.419136 | orchestrator | 2026-03-09 00:55:35.419168 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-09 00:55:35.419177 | orchestrator | Monday 09 March 2026 00:48:33 +0000 (0:00:00.887) 0:00:10.715 ********** 2026-03-09 00:55:35.419197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.419215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.419224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.419231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.419240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.419316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.419327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:55:35.419352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:55:35.419372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:55:35.419385 | orchestrator | 2026-03-09 00:55:35.419529 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-09 00:55:35.419538 | orchestrator | Monday 09 March 2026 00:48:36 +0000 (0:00:02.529) 0:00:13.245 ********** 2026-03-09 00:55:35.419546 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.419553 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.419561 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.419568 | orchestrator | 2026-03-09 00:55:35.419575 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-09 00:55:35.419583 | orchestrator | Monday 09 March 2026 00:48:37 +0000 (0:00:01.820) 0:00:15.065 ********** 2026-03-09 00:55:35.419590 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-09 00:55:35.419598 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-09 00:55:35.419605 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-09 00:55:35.419624 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-09 00:55:35.419632 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-09 00:55:35.419640 | orchestrator | changed: [testbed-node-2] => 
(item=rules) 2026-03-09 00:55:35.419647 | orchestrator | 2026-03-09 00:55:35.419654 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-09 00:55:35.419661 | orchestrator | Monday 09 March 2026 00:48:40 +0000 (0:00:02.686) 0:00:17.752 ********** 2026-03-09 00:55:35.419669 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.419676 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.419686 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.419698 | orchestrator | 2026-03-09 00:55:35.419715 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-09 00:55:35.419723 | orchestrator | Monday 09 March 2026 00:48:42 +0000 (0:00:02.093) 0:00:19.845 ********** 2026-03-09 00:55:35.419730 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.419738 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.419745 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.419752 | orchestrator | 2026-03-09 00:55:35.419759 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-09 00:55:35.419767 | orchestrator | Monday 09 March 2026 00:48:45 +0000 (0:00:02.408) 0:00:22.254 ********** 2026-03-09 00:55:35.419775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.419797 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.419805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.419813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.419836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 
'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4ed9dd1b1860209e7e12b6660115cfb8a5aa33bd', '__omit_place_holder__4ed9dd1b1860209e7e12b6660115cfb8a5aa33bd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:55:35.419844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.419923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.419949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4ed9dd1b1860209e7e12b6660115cfb8a5aa33bd', '__omit_place_holder__4ed9dd1b1860209e7e12b6660115cfb8a5aa33bd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:55:35.419957 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.419964 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.419979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.419988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-03-09 00:55:35.420128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.420138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4ed9dd1b1860209e7e12b6660115cfb8a5aa33bd', '__omit_place_holder__4ed9dd1b1860209e7e12b6660115cfb8a5aa33bd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:55:35.420146 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.420154 | orchestrator | 2026-03-09 00:55:35.420161 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-09 00:55:35.420169 | orchestrator | Monday 09 March 2026 00:48:46 +0000 (0:00:00.936) 0:00:23.191 ********** 2026-03-09 00:55:35.420177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.420192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.420233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.420246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.420258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.420266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.420274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__4ed9dd1b1860209e7e12b6660115cfb8a5aa33bd', '__omit_place_holder__4ed9dd1b1860209e7e12b6660115cfb8a5aa33bd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:55:35.420287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.420294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4ed9dd1b1860209e7e12b6660115cfb8a5aa33bd', '__omit_place_holder__4ed9dd1b1860209e7e12b6660115cfb8a5aa33bd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:55:35.420307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.420315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.420327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4ed9dd1b1860209e7e12b6660115cfb8a5aa33bd', '__omit_place_holder__4ed9dd1b1860209e7e12b6660115cfb8a5aa33bd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-09 00:55:35.420335 | orchestrator | 2026-03-09 00:55:35.420342 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-09 00:55:35.420350 | orchestrator | Monday 09 March 2026 00:48:48 +0000 (0:00:02.887) 0:00:26.078 ********** 2026-03-09 00:55:35.420357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.420370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.420378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.420392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.420400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.420411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.420420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:55:35.420538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:55:35.420555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:55:35.420563 | orchestrator | 2026-03-09 00:55:35.420571 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-09 00:55:35.420578 | orchestrator | Monday 09 March 2026 00:48:52 +0000 (0:00:03.140) 0:00:29.218 ********** 2026-03-09 00:55:35.420586 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-09 00:55:35.420593 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-09 00:55:35.420601 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-09 00:55:35.420608 | orchestrator | 2026-03-09 00:55:35.420615 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-09 00:55:35.420685 | orchestrator | Monday 09 March 2026 00:48:56 +0000 (0:00:04.189) 0:00:33.407 ********** 2026-03-09 00:55:35.420694 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-09 00:55:35.420702 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-09 00:55:35.420709 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-09 00:55:35.420717 | orchestrator | 2026-03-09 00:55:35.426234 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-09 00:55:35.426297 | orchestrator | Monday 09 March 2026 00:49:00 +0000 (0:00:04.284) 0:00:37.691 ********** 2026-03-09 00:55:35.426306 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.426313 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.426318 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.426324 | orchestrator | 2026-03-09 00:55:35.426330 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-09 00:55:35.426336 | orchestrator | Monday 09 March 2026 00:49:01 +0000 (0:00:01.012) 0:00:38.704 ********** 2026-03-09 00:55:35.426342 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-09 00:55:35.426349 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-09 00:55:35.426355 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-09 00:55:35.426361 | orchestrator | 2026-03-09 00:55:35.426366 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-09 00:55:35.426372 | orchestrator | Monday 09 March 2026 00:49:05 +0000 (0:00:03.447) 0:00:42.152 ********** 2026-03-09 00:55:35.426378 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-09 00:55:35.426384 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-09 00:55:35.426418 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-09 00:55:35.426424 | orchestrator | 2026-03-09 00:55:35.426430 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-09 00:55:35.426435 | orchestrator | Monday 09 March 2026 00:49:08 +0000 (0:00:03.354) 0:00:45.506 ********** 2026-03-09 00:55:35.426441 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-09 00:55:35.426447 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-09 00:55:35.426452 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-09 00:55:35.426458 | orchestrator | 2026-03-09 00:55:35.426463 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-09 00:55:35.426469 | orchestrator | Monday 09 March 2026 00:49:10 +0000 (0:00:02.296) 0:00:47.803 ********** 2026-03-09 00:55:35.426474 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-09 00:55:35.426480 | orchestrator | changed: 
[testbed-node-1] => (item=haproxy-internal.pem) 2026-03-09 00:55:35.426486 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-09 00:55:35.426491 | orchestrator | 2026-03-09 00:55:35.426497 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-09 00:55:35.426502 | orchestrator | Monday 09 March 2026 00:49:12 +0000 (0:00:02.230) 0:00:50.034 ********** 2026-03-09 00:55:35.426508 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.426513 | orchestrator | 2026-03-09 00:55:35.426519 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-09 00:55:35.426524 | orchestrator | Monday 09 March 2026 00:49:13 +0000 (0:00:00.994) 0:00:51.029 ********** 2026-03-09 00:55:35.426531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.426540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.426558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.426564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.426579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.426585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.426591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:55:35.426598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-03-09 00:55:35.426604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:55:35.426610 | orchestrator | 2026-03-09 00:55:35.426616 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-09 00:55:35.426621 | orchestrator | Monday 09 March 2026 00:49:19 +0000 (0:00:05.346) 0:00:56.375 ********** 2026-03-09 00:55:35.426633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.426643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.426652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.426658 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.426664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.426670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.426675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.426681 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.426687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.426696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.426706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.426712 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.426717 | orchestrator | 2026-03-09 00:55:35.426723 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-09 00:55:35.426729 | orchestrator | Monday 09 March 2026 00:49:20 +0000 (0:00:01.546) 0:00:57.922 ********** 2026-03-09 00:55:35.426737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.426743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.426749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.426755 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.426761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.426771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.426781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.426787 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.426795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.426801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.426807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.426813 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.426818 | orchestrator | 2026-03-09 00:55:35.426824 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-09 00:55:35.426830 | orchestrator | Monday 09 March 2026 00:49:23 +0000 (0:00:02.487) 0:01:00.411 ********** 2026-03-09 00:55:35.426835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.426845 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.426855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.426861 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.426866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.426872 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.426878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.426904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.426910 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.426923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.426929 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.426934 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.426940 | orchestrator | 2026-03-09 00:55:35.426945 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-09 00:55:35.426951 | orchestrator | Monday 09 March 2026 00:49:25 +0000 (0:00:02.362) 0:01:02.773 ********** 2026-03-09 00:55:35.426957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.426965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.426971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.426977 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.426983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.426988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427027 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.427037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427058 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.427064 | orchestrator | 2026-03-09 00:55:35.427069 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-09 00:55:35.427075 | orchestrator | Monday 09 March 2026 00:49:26 +0000 (0:00:00.676) 0:01:03.450 ********** 2026-03-09 00:55:35.427080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427101 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.427110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427131 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.427137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427157 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.427163 | orchestrator | 2026-03-09 00:55:35.427168 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-09 00:55:35.427174 | orchestrator | Monday 09 March 2026 00:49:27 +0000 (0:00:00.957) 0:01:04.407 ********** 2026-03-09 00:55:35.427180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427201 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.427210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427232 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.427237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427257 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.427263 | orchestrator | 2026-03-09 00:55:35.427268 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-09 00:55:35.427274 | orchestrator | Monday 09 March 2026 00:49:28 +0000 (0:00:01.038) 0:01:05.446 ********** 2026-03-09 
00:55:35.427282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427303 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.427309 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427333 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.427338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427362 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.427368 | orchestrator | 2026-03-09 00:55:35.427373 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend 
internal TLS key] **** 2026-03-09 00:55:35.427379 | orchestrator | Monday 09 March 2026 00:49:28 +0000 (0:00:00.680) 0:01:06.127 ********** 2026-03-09 00:55:35.427385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-03-09 00:55:35.427402 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.427422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427447 | 
orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.427452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-09 00:55:35.427458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-09 00:55:35.427464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-09 00:55:35.427469 | orchestrator | skipping: [testbed-node-2] 
2026-03-09 00:55:35.427475 | orchestrator | 2026-03-09 00:55:35.427480 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-09 00:55:35.427486 | orchestrator | Monday 09 March 2026 00:49:30 +0000 (0:00:01.060) 0:01:07.188 ********** 2026-03-09 00:55:35.427491 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-09 00:55:35.427498 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-09 00:55:35.427507 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-09 00:55:35.427513 | orchestrator | 2026-03-09 00:55:35.427518 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-09 00:55:35.427524 | orchestrator | Monday 09 March 2026 00:49:32 +0000 (0:00:02.059) 0:01:09.247 ********** 2026-03-09 00:55:35.427529 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-09 00:55:35.427535 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-09 00:55:35.427540 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-09 00:55:35.427546 | orchestrator | 2026-03-09 00:55:35.427551 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-09 00:55:35.427561 | orchestrator | Monday 09 March 2026 00:49:33 +0000 (0:00:01.651) 0:01:10.898 ********** 2026-03-09 00:55:35.427566 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 00:55:35.427572 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 
'sshd_config'})  2026-03-09 00:55:35.427577 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 00:55:35.427583 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 00:55:35.427588 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.427594 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 00:55:35.427600 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.427608 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 00:55:35.427614 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.427619 | orchestrator | 2026-03-09 00:55:35.427625 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-09 00:55:35.427630 | orchestrator | Monday 09 March 2026 00:49:34 +0000 (0:00:01.161) 0:01:12.060 ********** 2026-03-09 00:55:35.427636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.427642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.427648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-09 00:55:35.427657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.427663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.427674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-09 00:55:35.427683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:55:35.427689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:55:35.427695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-09 00:55:35.427700 | orchestrator | 2026-03-09 00:55:35.427706 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-09 00:55:35.427711 | orchestrator | Monday 09 March 2026 00:49:37 +0000 (0:00:03.049) 0:01:15.109 ********** 2026-03-09 00:55:35.427717 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.427723 | orchestrator | 2026-03-09 00:55:35.427728 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-09 00:55:35.427734 | orchestrator | Monday 09 March 2026 00:49:38 +0000 (0:00:00.755) 0:01:15.864 ********** 2026-03-09 00:55:35.427740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': 
'30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-09 00:55:35.427755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.427761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.427770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.427776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-09 00:55:35.427782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.427787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.427806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.427812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-09 00:55:35.427821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.427827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.427832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.427838 | orchestrator | 2026-03-09 00:55:35.427843 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-09 00:55:35.427849 | orchestrator | Monday 09 March 2026 00:49:43 +0000 (0:00:05.084) 0:01:20.949 ********** 2026-03-09 00:55:35.427855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 
'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-09 00:55:35.427869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.427875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.427884 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.427890 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.427895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-09 00:55:35.427901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.427907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.427920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-09 00:55:35.427926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.427934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.427940 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.427946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.427952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.427957 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.427963 | orchestrator | 2026-03-09 00:55:35.427968 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-09 00:55:35.427974 | orchestrator | Monday 09 March 2026 00:49:45 +0000 (0:00:01.549) 0:01:22.498 ********** 2026-03-09 00:55:35.427980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-09 00:55:35.427991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-09 00:55:35.428012 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.428019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-09 00:55:35.428025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-09 00:55:35.428030 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.428036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-09 00:55:35.428042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-09 00:55:35.428047 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.428053 | orchestrator | 2026-03-09 00:55:35.428062 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-09 00:55:35.428067 | orchestrator | Monday 09 March 2026 00:49:46 +0000 (0:00:01.170) 0:01:23.669 ********** 2026-03-09 00:55:35.428073 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.428079 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.428084 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.428090 | orchestrator | 2026-03-09 00:55:35.428095 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-09 00:55:35.428101 | orchestrator | Monday 09 March 2026 00:49:47 +0000 (0:00:01.463) 0:01:25.133 ********** 2026-03-09 00:55:35.428106 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.428112 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.428117 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.428123 | orchestrator | 2026-03-09 00:55:35.428128 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-09 00:55:35.428134 | orchestrator | Monday 09 March 2026 00:49:50 +0000 (0:00:02.312) 0:01:27.445 ********** 2026-03-09 00:55:35.428139 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.428145 | orchestrator | 2026-03-09 00:55:35.428150 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-09 00:55:35.428155 | orchestrator | Monday 09 March 2026 00:49:51 +0000 (0:00:00.940) 0:01:28.386 ********** 2026-03-09 00:55:35.428164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.428171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.428197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.428222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-03-09 00:55:35.428227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428233 | orchestrator | 2026-03-09 00:55:35.428239 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-09 00:55:35.428244 | orchestrator | Monday 09 March 2026 00:49:56 +0000 (0:00:05.078) 0:01:33.464 ********** 2026-03-09 00:55:35.428253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.428259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428275 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.428285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.428290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428302 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.428311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.428317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428335 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
00:55:35.428341 | orchestrator | 2026-03-09 00:55:35.428346 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-09 00:55:35.428352 | orchestrator | Monday 09 March 2026 00:49:57 +0000 (0:00:00.836) 0:01:34.300 ********** 2026-03-09 00:55:35.428358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-09 00:55:35.428364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-09 00:55:35.428369 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.428375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-09 00:55:35.428381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-09 00:55:35.428386 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.428392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-09 00:55:35.428398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-09 00:55:35.428403 | orchestrator | 
skipping: [testbed-node-2] 2026-03-09 00:55:35.428409 | orchestrator | 2026-03-09 00:55:35.428414 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-09 00:55:35.428420 | orchestrator | Monday 09 March 2026 00:49:58 +0000 (0:00:01.188) 0:01:35.489 ********** 2026-03-09 00:55:35.428425 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.428430 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.428436 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.428441 | orchestrator | 2026-03-09 00:55:35.428447 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-09 00:55:35.428452 | orchestrator | Monday 09 March 2026 00:49:59 +0000 (0:00:01.458) 0:01:36.947 ********** 2026-03-09 00:55:35.428458 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.428463 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.428469 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.428474 | orchestrator | 2026-03-09 00:55:35.428483 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-09 00:55:35.428488 | orchestrator | Monday 09 March 2026 00:50:02 +0000 (0:00:02.195) 0:01:39.143 ********** 2026-03-09 00:55:35.428494 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.428499 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.428505 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.428510 | orchestrator | 2026-03-09 00:55:35.428516 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-09 00:55:35.428522 | orchestrator | Monday 09 March 2026 00:50:02 +0000 (0:00:00.364) 0:01:39.508 ********** 2026-03-09 00:55:35.428531 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.428536 | orchestrator | 2026-03-09 00:55:35.428542 | 
orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-09 00:55:35.428547 | orchestrator | Monday 09 March 2026 00:50:03 +0000 (0:00:00.928) 0:01:40.437 ********** 2026-03-09 00:55:35.428556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-09 00:55:35.428562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-09 00:55:35.428568 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-09 00:55:35.428574 | orchestrator | 2026-03-09 00:55:35.428580 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-09 00:55:35.428585 | orchestrator | Monday 09 March 2026 00:50:06 +0000 (0:00:03.016) 0:01:43.453 ********** 2026-03-09 00:55:35.428594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-09 
00:55:35.428600 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.428609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-09 00:55:35.428615 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.428621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-09 00:55:35.428627 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.428632 | orchestrator | 2026-03-09 
00:55:35.428638 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-09 00:55:35.428643 | orchestrator | Monday 09 March 2026 00:50:08 +0000 (0:00:01.812) 0:01:45.265 ********** 2026-03-09 00:55:35.428651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-09 00:55:35.428657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-09 00:55:35.428664 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.428669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-09 00:55:35.428675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-09 00:55:35.428681 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.428694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-09 00:55:35.428700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-09 00:55:35.428706 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.428711 | orchestrator | 2026-03-09 00:55:35.428717 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-09 00:55:35.428722 | orchestrator | Monday 09 March 2026 00:50:10 +0000 (0:00:02.311) 0:01:47.577 ********** 2026-03-09 00:55:35.428728 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.428733 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.428739 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.428744 | orchestrator | 2026-03-09 00:55:35.428750 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-09 00:55:35.428755 | orchestrator | Monday 09 March 2026 00:50:11 +0000 (0:00:00.952) 0:01:48.530 ********** 2026-03-09 00:55:35.428760 | 
orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.428766 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.428771 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.428777 | orchestrator | 2026-03-09 00:55:35.428782 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-09 00:55:35.428797 | orchestrator | Monday 09 March 2026 00:50:12 +0000 (0:00:01.293) 0:01:49.823 ********** 2026-03-09 00:55:35.428806 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.428812 | orchestrator | 2026-03-09 00:55:35.428817 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-09 00:55:35.428823 | orchestrator | Monday 09 March 2026 00:50:13 +0000 (0:00:00.769) 0:01:50.593 ********** 2026-03-09 00:55:35.428828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.428835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.428869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.428895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})2026-03-09 00:55:35 | INFO  | Task 12928b58-36d7-4f8d-bc4a-2ea73d023c26 is in state STARTED 2026-03-09 00:55:35.428902 | orchestrator | 2026-03-09 00:55:35 | 
INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:35.429071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429094 | orchestrator | 2026-03-09 00:55:35.429100 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-09 00:55:35.429106 | orchestrator | Monday 09 March 2026 00:50:18 +0000 (0:00:05.374) 0:01:55.967 ********** 2026-03-09 00:55:35.429111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.429123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429144 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.429155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.429162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429177 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429183 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.429192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.429201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429222 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.429227 | orchestrator | 2026-03-09 00:55:35.429233 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] 
************************ 2026-03-09 00:55:35.429239 | orchestrator | Monday 09 March 2026 00:50:19 +0000 (0:00:01.148) 0:01:57.115 ********** 2026-03-09 00:55:35.429245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-09 00:55:35.429250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-09 00:55:35.429256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-09 00:55:35.429262 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.429268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-09 00:55:35.429273 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.429279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-09 00:55:35.429287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-09 00:55:35.429293 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.429298 | orchestrator | 2026-03-09 00:55:35.429304 | orchestrator | TASK [proxysql-config : Copying over 
cinder ProxySQL users config] ************* 2026-03-09 00:55:35.429310 | orchestrator | Monday 09 March 2026 00:50:21 +0000 (0:00:01.509) 0:01:58.625 ********** 2026-03-09 00:55:35.429315 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.429320 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.429326 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.429331 | orchestrator | 2026-03-09 00:55:35.429337 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-09 00:55:35.429342 | orchestrator | Monday 09 March 2026 00:50:23 +0000 (0:00:01.671) 0:02:00.297 ********** 2026-03-09 00:55:35.429348 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.429354 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.429359 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.429364 | orchestrator | 2026-03-09 00:55:35.429370 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-09 00:55:35.429375 | orchestrator | Monday 09 March 2026 00:50:25 +0000 (0:00:02.230) 0:02:02.528 ********** 2026-03-09 00:55:35.429381 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.429387 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.429392 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.429397 | orchestrator | 2026-03-09 00:55:35.429406 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-09 00:55:35.429417 | orchestrator | Monday 09 March 2026 00:50:26 +0000 (0:00:00.624) 0:02:03.153 ********** 2026-03-09 00:55:35.429422 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.429428 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.429433 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.429439 | orchestrator | 2026-03-09 00:55:35.429444 | orchestrator | TASK [include_role : designate] 
************************************************ 2026-03-09 00:55:35.429450 | orchestrator | Monday 09 March 2026 00:50:26 +0000 (0:00:00.423) 0:02:03.576 ********** 2026-03-09 00:55:35.429455 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.429461 | orchestrator | 2026-03-09 00:55:35.429466 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-09 00:55:35.429472 | orchestrator | Monday 09 March 2026 00:50:27 +0000 (0:00:00.857) 0:02:04.434 ********** 2026-03-09 00:55:35.429477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 00:55:35.429484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:55:35.429490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 00:55:35.429524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:55:35.429536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 00:55:35.429581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:55:35.429596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429633 | orchestrator | 2026-03-09 00:55:35.429640 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-09 00:55:35.429646 | orchestrator | Monday 09 March 2026 00:50:31 +0000 (0:00:04.652) 0:02:09.086 ********** 2026-03-09 00:55:35.429653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 00:55:35.429663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:55:35.429673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429709 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.429719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 00:55:35.429729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:55:35.429741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 
5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429778 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.429788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 00:55:35.429798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 00:55:35.429804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.429845 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.429851 | orchestrator | 2026-03-09 00:55:35.429857 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-09 00:55:35.429864 | orchestrator | Monday 09 March 2026 00:50:33 +0000 (0:00:01.422) 0:02:10.509 ********** 2026-03-09 00:55:35.429871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-09 00:55:35.429877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-09 00:55:35.429884 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.429890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-09 00:55:35.429900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-09 00:55:35.429905 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.429911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-09 00:55:35.429916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-09 00:55:35.429922 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.429927 | orchestrator | 2026-03-09 00:55:35.429933 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-09 00:55:35.429938 | orchestrator | Monday 09 March 2026 00:50:35 +0000 (0:00:01.893) 0:02:12.403 ********** 2026-03-09 00:55:35.429944 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.429949 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.429955 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.429960 | orchestrator | 2026-03-09 00:55:35.429965 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 
2026-03-09 00:55:35.429971 | orchestrator | Monday 09 March 2026 00:50:37 +0000 (0:00:02.163) 0:02:14.567 ********** 2026-03-09 00:55:35.429976 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.429982 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.429987 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.429993 | orchestrator | 2026-03-09 00:55:35.430165 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-09 00:55:35.430172 | orchestrator | Monday 09 March 2026 00:50:40 +0000 (0:00:02.581) 0:02:17.149 ********** 2026-03-09 00:55:35.430178 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.430183 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.430189 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.430194 | orchestrator | 2026-03-09 00:55:35.430200 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-09 00:55:35.430211 | orchestrator | Monday 09 March 2026 00:50:40 +0000 (0:00:00.589) 0:02:17.738 ********** 2026-03-09 00:55:35.430217 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.430223 | orchestrator | 2026-03-09 00:55:35.430228 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-09 00:55:35.430234 | orchestrator | Monday 09 March 2026 00:50:41 +0000 (0:00:01.333) 0:02:19.072 ********** 2026-03-09 00:55:35.430258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 00:55:35.430271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:55:35.430286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 00:55:35.430302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:55:35.430308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 00:55:35.430326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:55:35.430333 | orchestrator | 2026-03-09 00:55:35.430338 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-09 00:55:35.430344 | orchestrator | Monday 09 March 2026 00:50:48 +0000 (0:00:06.663) 0:02:25.736 ********** 2026-03-09 00:55:35.430350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 00:55:35.430371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:55:35.430378 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.430387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 00:55:35.430402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:55:35.430409 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.430418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 00:55:35.430434 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-09 00:55:35.430441 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.430447 | orchestrator | 2026-03-09 
00:55:35.430452 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-09 00:55:35.430458 | orchestrator | Monday 09 March 2026 00:50:55 +0000 (0:00:06.409) 0:02:32.145 ********** 2026-03-09 00:55:35.430463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:55:35.430473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:55:35.430479 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.430484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:55:35.430494 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:55:35.430500 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.430506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:55:35.430511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-09 00:55:35.430517 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.430523 | orchestrator | 2026-03-09 00:55:35.430528 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-09 00:55:35.430534 | orchestrator | Monday 09 March 
2026 00:51:01 +0000 (0:00:06.942) 0:02:39.088 ********** 2026-03-09 00:55:35.430539 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.430545 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.430550 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.430556 | orchestrator | 2026-03-09 00:55:35.430561 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-09 00:55:35.430566 | orchestrator | Monday 09 March 2026 00:51:03 +0000 (0:00:01.453) 0:02:40.542 ********** 2026-03-09 00:55:35.430572 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.430577 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.430583 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.430588 | orchestrator | 2026-03-09 00:55:35.430594 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-09 00:55:35.430611 | orchestrator | Monday 09 March 2026 00:51:05 +0000 (0:00:02.179) 0:02:42.721 ********** 2026-03-09 00:55:35.430616 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.430621 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.430626 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.430631 | orchestrator | 2026-03-09 00:55:35.430636 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-09 00:55:35.430640 | orchestrator | Monday 09 March 2026 00:51:06 +0000 (0:00:00.684) 0:02:43.405 ********** 2026-03-09 00:55:35.430645 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.430650 | orchestrator | 2026-03-09 00:55:35.430655 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-09 00:55:35.430660 | orchestrator | Monday 09 March 2026 00:51:07 +0000 (0:00:00.998) 0:02:44.404 ********** 2026-03-09 00:55:35.430668 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 00:55:35.430678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 00:55:35.430683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 00:55:35.430688 | orchestrator | 2026-03-09 00:55:35.430693 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-09 00:55:35.430698 | orchestrator | Monday 09 March 2026 00:51:12 +0000 (0:00:05.034) 0:02:49.438 ********** 2026-03-09 00:55:35.430703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 00:55:35.430708 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.430723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 00:55:35.430729 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.430734 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 00:55:35.430743 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.430748 | orchestrator | 2026-03-09 00:55:35.430753 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-09 00:55:35.430757 | orchestrator | Monday 09 March 2026 00:51:13 +0000 (0:00:00.734) 0:02:50.173 ********** 2026-03-09 00:55:35.430765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-09 00:55:35.430770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-09 00:55:35.430775 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.430780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-09 00:55:35.430785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  
2026-03-09 00:55:35.430790 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.430795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-09 00:55:35.430800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-09 00:55:35.430805 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.430810 | orchestrator | 2026-03-09 00:55:35.430814 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-09 00:55:35.430819 | orchestrator | Monday 09 March 2026 00:51:13 +0000 (0:00:00.665) 0:02:50.839 ********** 2026-03-09 00:55:35.430824 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.430829 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.430834 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.430839 | orchestrator | 2026-03-09 00:55:35.430843 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-09 00:55:35.430848 | orchestrator | Monday 09 March 2026 00:51:15 +0000 (0:00:01.377) 0:02:52.216 ********** 2026-03-09 00:55:35.430853 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.430858 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.430863 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.430868 | orchestrator | 2026-03-09 00:55:35.430872 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-09 00:55:35.430877 | orchestrator | Monday 09 March 2026 00:51:17 +0000 (0:00:02.198) 0:02:54.415 ********** 2026-03-09 00:55:35.430883 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.430888 | 
orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.430893 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.430898 | orchestrator | 2026-03-09 00:55:35.430903 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-09 00:55:35.430908 | orchestrator | Monday 09 March 2026 00:51:17 +0000 (0:00:00.459) 0:02:54.874 ********** 2026-03-09 00:55:35.430912 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.430917 | orchestrator | 2026-03-09 00:55:35.430922 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-09 00:55:35.430927 | orchestrator | Monday 09 March 2026 00:51:19 +0000 (0:00:01.426) 0:02:56.301 ********** 2026-03-09 00:55:35.430948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 00:55:35.430955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 00:55:35.430989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 00:55:35.431010 | orchestrator | 2026-03-09 00:55:35.431016 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-09 00:55:35.431021 | orchestrator | Monday 09 March 2026 00:51:24 +0000 (0:00:05.286) 0:03:01.587 ********** 2026-03-09 00:55:35.431036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 00:55:35.431046 | orchestrator | skipping: [testbed-node-0] 
2026-03-09 00:55:35.431055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 00:55:35.431070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 00:55:35.431079 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.431084 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.431089 | orchestrator | 2026-03-09 00:55:35.431094 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-09 00:55:35.431099 | orchestrator | Monday 09 March 2026 00:51:25 +0000 (0:00:01.315) 0:03:02.903 ********** 2026-03-09 00:55:35.431105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-09 00:55:35.431114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:55:35.431122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-09 00:55:35.431128 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:55:35.431133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-09 00:55:35.431138 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.431143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-09 00:55:35.431148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:55:35.431153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-09 00:55:35.431164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-09 00:55:35.431169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:55:35.431175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:55:35.431189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-09 00:55:35.431194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-09 00:55:35.431199 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.431204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-09 00:55:35.431210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-03-09 00:55:35.431215 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.431219 | orchestrator | 2026-03-09 00:55:35.431227 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-09 00:55:35.431232 | orchestrator | Monday 09 March 2026 00:51:26 +0000 (0:00:01.014) 0:03:03.917 ********** 2026-03-09 00:55:35.431237 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.431242 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.431247 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.431252 | orchestrator | 2026-03-09 00:55:35.431257 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-09 00:55:35.431262 | orchestrator | Monday 09 March 2026 00:51:28 +0000 (0:00:01.274) 0:03:05.191 ********** 2026-03-09 00:55:35.431266 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.431271 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.431276 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.431281 | orchestrator | 2026-03-09 00:55:35.431286 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-09 00:55:35.431290 | orchestrator | Monday 09 March 2026 00:51:29 +0000 (0:00:01.894) 0:03:07.086 ********** 2026-03-09 00:55:35.431295 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.431300 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.431305 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.431310 | orchestrator | 2026-03-09 00:55:35.431314 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-09 00:55:35.431323 | orchestrator | Monday 09 March 2026 00:51:30 +0000 (0:00:00.286) 0:03:07.372 ********** 2026-03-09 00:55:35.431328 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.431333 | orchestrator | skipping: [testbed-node-1] 
2026-03-09 00:55:35.431338 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.431342 | orchestrator | 2026-03-09 00:55:35.431347 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-09 00:55:35.431352 | orchestrator | Monday 09 March 2026 00:51:30 +0000 (0:00:00.458) 0:03:07.830 ********** 2026-03-09 00:55:35.431357 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.431362 | orchestrator | 2026-03-09 00:55:35.431369 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-09 00:55:35.431377 | orchestrator | Monday 09 March 2026 00:51:31 +0000 (0:00:00.878) 0:03:08.709 ********** 2026-03-09 00:55:35.431386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 00:55:35.431407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:55:35.431416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:55:35.431429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 00:55:35.431444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:55:35.431454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:55:35.431460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 00:55:35.431476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:55:35.431482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:55:35.431487 | orchestrator | 2026-03-09 00:55:35.431495 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-09 00:55:35.431500 | 
orchestrator | Monday 09 March 2026 00:51:35 +0000 (0:00:03.652) 0:03:12.362 ********** 2026-03-09 00:55:35.431506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-09 00:55:35.431518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:55:35.431523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:55:35.431528 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.431543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-09 00:55:35.431549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:55:35.431559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:55:35.431568 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.431574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-03-09 00:55:35.431579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 00:55:35.431584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 00:55:35.431589 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.431594 | orchestrator | 2026-03-09 00:55:35.431599 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-09 00:55:35.431613 | orchestrator | Monday 09 March 2026 00:51:36 +0000 (0:00:00.810) 0:03:13.172 ********** 2026-03-09 00:55:35.431618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-09 00:55:35.431624 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-09 00:55:35.431629 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.431637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-09 00:55:35.431654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-09 00:55:35.431663 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.431671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-09 00:55:35.431678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-09 00:55:35.431686 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.431694 | orchestrator | 2026-03-09 00:55:35.431701 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-09 00:55:35.431726 | orchestrator | Monday 09 March 2026 00:51:36 +0000 (0:00:00.821) 0:03:13.994 ********** 2026-03-09 00:55:35.431734 | orchestrator | 
changed: [testbed-node-0] 2026-03-09 00:55:35.431743 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.431748 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.431753 | orchestrator | 2026-03-09 00:55:35.431758 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-09 00:55:35.431763 | orchestrator | Monday 09 March 2026 00:51:38 +0000 (0:00:01.419) 0:03:15.414 ********** 2026-03-09 00:55:35.431768 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.431773 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.431778 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.431783 | orchestrator | 2026-03-09 00:55:35.431788 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-09 00:55:35.431793 | orchestrator | Monday 09 March 2026 00:51:40 +0000 (0:00:02.531) 0:03:17.946 ********** 2026-03-09 00:55:35.431798 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.431803 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.431808 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.431812 | orchestrator | 2026-03-09 00:55:35.431817 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-09 00:55:35.431822 | orchestrator | Monday 09 March 2026 00:51:41 +0000 (0:00:00.656) 0:03:18.602 ********** 2026-03-09 00:55:35.431827 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.431832 | orchestrator | 2026-03-09 00:55:35.431837 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-09 00:55:35.431842 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:01.281) 0:03:19.884 ********** 2026-03-09 00:55:35.431847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 00:55:35.431870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.431880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 00:55:35.431886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.431892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 00:55:35.431897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.431906 | orchestrator | 2026-03-09 00:55:35.431911 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-09 00:55:35.431916 | orchestrator | Monday 09 March 2026 00:51:46 +0000 (0:00:04.094) 0:03:23.979 ********** 2026-03-09 00:55:35.431926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 00:55:35.431935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.431940 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.431945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 00:55:35.431950 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.431955 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.431964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 00:55:35.431975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.431980 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.431984 | orchestrator | 2026-03-09 00:55:35.431989 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-09 00:55:35.431994 | orchestrator | Monday 09 March 2026 00:51:48 +0000 (0:00:01.204) 0:03:25.183 ********** 2026-03-09 00:55:35.432023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-09 00:55:35.432028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-09 00:55:35.432034 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.432038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-09 00:55:35.432043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-09 00:55:35.432048 | orchestrator | skipping: [testbed-node-1] 2026-03-09 
00:55:35.432053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-09 00:55:35.432058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-09 00:55:35.432063 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.432068 | orchestrator | 2026-03-09 00:55:35.432073 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-09 00:55:35.432078 | orchestrator | Monday 09 March 2026 00:51:49 +0000 (0:00:01.001) 0:03:26.184 ********** 2026-03-09 00:55:35.432082 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.432087 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.432092 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.432097 | orchestrator | 2026-03-09 00:55:35.432102 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-09 00:55:35.432107 | orchestrator | Monday 09 March 2026 00:51:50 +0000 (0:00:01.529) 0:03:27.714 ********** 2026-03-09 00:55:35.432112 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.432116 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.432125 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.432130 | orchestrator | 2026-03-09 00:55:35.432134 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-09 00:55:35.432139 | orchestrator | Monday 09 March 2026 00:51:53 +0000 (0:00:02.588) 0:03:30.303 ********** 2026-03-09 00:55:35.432144 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.432149 | orchestrator | 2026-03-09 00:55:35.432154 | orchestrator 
| TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-09 00:55:35.432159 | orchestrator | Monday 09 March 2026 00:51:54 +0000 (0:00:01.429) 0:03:31.732 ********** 2026-03-09 00:55:35.432164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-09 00:55:35.432180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-09 00:55:35.432194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432211 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-09 00:55:35.432238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432258 | orchestrator | 2026-03-09 00:55:35.432263 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-09 00:55:35.432268 | orchestrator | Monday 09 March 2026 00:51:58 +0000 (0:00:04.188) 0:03:35.920 ********** 2026-03-09 00:55:35.432277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-09 00:55:35.432283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432301 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.432306 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-09 00:55:35.432315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432340 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.432348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-09 00:55:35.432354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.432373 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.432378 | orchestrator | 2026-03-09 00:55:35.432383 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-09 00:55:35.432388 | orchestrator | Monday 09 March 2026 00:51:59 +0000 (0:00:00.751) 
0:03:36.672 ********** 2026-03-09 00:55:35.432393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-09 00:55:35.432398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-09 00:55:35.432403 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.432408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-09 00:55:35.432422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-09 00:55:35.432427 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.432433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-09 00:55:35.432438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-09 00:55:35.432443 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.432448 | orchestrator | 2026-03-09 00:55:35.432453 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-09 00:55:35.432457 | orchestrator | Monday 09 March 2026 00:52:01 +0000 (0:00:01.468) 0:03:38.140 ********** 2026-03-09 00:55:35.432463 | orchestrator | changed: [testbed-node-0] 2026-03-09 
00:55:35.432467 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.432472 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.432477 | orchestrator | 2026-03-09 00:55:35.432482 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-09 00:55:35.432487 | orchestrator | Monday 09 March 2026 00:52:02 +0000 (0:00:01.409) 0:03:39.549 ********** 2026-03-09 00:55:35.432498 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.432503 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.432509 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.432514 | orchestrator | 2026-03-09 00:55:35.432519 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-09 00:55:35.432523 | orchestrator | Monday 09 March 2026 00:52:04 +0000 (0:00:02.178) 0:03:41.728 ********** 2026-03-09 00:55:35.432528 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.432533 | orchestrator | 2026-03-09 00:55:35.432538 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-09 00:55:35.432543 | orchestrator | Monday 09 March 2026 00:52:06 +0000 (0:00:01.480) 0:03:43.209 ********** 2026-03-09 00:55:35.432548 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-09 00:55:35.432553 | orchestrator | 2026-03-09 00:55:35.432558 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-09 00:55:35.432563 | orchestrator | Monday 09 March 2026 00:52:08 +0000 (0:00:02.719) 0:03:45.929 ********** 2026-03-09 00:55:35.432569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:55:35.432586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:55:35.432591 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.432599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:55:35.432609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:55:35.432614 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.432624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:55:35.432629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:55:35.432640 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.432645 | orchestrator | 2026-03-09 00:55:35.432650 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-09 00:55:35.432658 | orchestrator | Monday 09 March 2026 00:52:11 +0000 (0:00:02.307) 0:03:48.237 ********** 2026-03-09 00:55:35.432663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:55:35.432669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:55:35.432674 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.432686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:55:35.432696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:55:35.432701 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.432706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:55:35.432721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-09 00:55:35.432730 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.432735 | orchestrator | 2026-03-09 00:55:35.432740 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-09 00:55:35.432745 | orchestrator | Monday 09 March 2026 00:52:13 +0000 (0:00:02.546) 0:03:50.784 ********** 2026-03-09 00:55:35.432753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': 
[' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:55:35.432759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:55:35.432764 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.432769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:55:35.432774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:55:35.432779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:55:35.432784 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.432798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-09 00:55:35.432807 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.432812 | orchestrator | 2026-03-09 00:55:35.432817 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-09 00:55:35.432822 | orchestrator | Monday 09 March 2026 00:52:17 +0000 (0:00:03.385) 0:03:54.170 ********** 2026-03-09 00:55:35.432827 | orchestrator | changed: [testbed-node-0] 2026-03-09 
00:55:35.432832 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.432837 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.432842 | orchestrator | 2026-03-09 00:55:35.432847 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-09 00:55:35.432852 | orchestrator | Monday 09 March 2026 00:52:18 +0000 (0:00:01.774) 0:03:55.944 ********** 2026-03-09 00:55:35.432857 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.432862 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.432866 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.432871 | orchestrator | 2026-03-09 00:55:35.432876 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-09 00:55:35.432881 | orchestrator | Monday 09 March 2026 00:52:20 +0000 (0:00:01.619) 0:03:57.564 ********** 2026-03-09 00:55:35.432886 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.432891 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.432895 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.432900 | orchestrator | 2026-03-09 00:55:35.432905 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-09 00:55:35.432910 | orchestrator | Monday 09 March 2026 00:52:20 +0000 (0:00:00.352) 0:03:57.916 ********** 2026-03-09 00:55:35.432918 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.432923 | orchestrator | 2026-03-09 00:55:35.432927 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-09 00:55:35.432933 | orchestrator | Monday 09 March 2026 00:52:22 +0000 (0:00:01.398) 0:03:59.315 ********** 2026-03-09 00:55:35.432938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-09 00:55:35.432943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-09 00:55:35.432949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 
'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-09 00:55:35.432958 | orchestrator | 2026-03-09 00:55:35.432963 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-09 00:55:35.432968 | orchestrator | Monday 09 March 2026 00:52:23 +0000 (0:00:01.619) 0:04:00.935 ********** 2026-03-09 00:55:35.432982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-09 00:55:35.432992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-09 00:55:35.433020 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.433026 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.433031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-09 00:55:35.433036 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.433041 | orchestrator | 2026-03-09 00:55:35.433046 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-09 00:55:35.433051 | orchestrator | Monday 09 March 2026 00:52:24 +0000 (0:00:00.466) 0:04:01.401 ********** 2026-03-09 00:55:35.433056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-09 00:55:35.433062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-09 
00:55:35.433071 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.433077 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.433082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-09 00:55:35.433087 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.433092 | orchestrator | 2026-03-09 00:55:35.433096 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-09 00:55:35.433101 | orchestrator | Monday 09 March 2026 00:52:25 +0000 (0:00:00.891) 0:04:02.292 ********** 2026-03-09 00:55:35.433106 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.433111 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.433116 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.433121 | orchestrator | 2026-03-09 00:55:35.433126 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-09 00:55:35.433131 | orchestrator | Monday 09 March 2026 00:52:25 +0000 (0:00:00.490) 0:04:02.783 ********** 2026-03-09 00:55:35.433135 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.433140 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.433145 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.433150 | orchestrator | 2026-03-09 00:55:35.433155 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-09 00:55:35.433160 | orchestrator | Monday 09 March 2026 00:52:27 +0000 (0:00:01.439) 0:04:04.222 ********** 2026-03-09 00:55:35.433165 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.433170 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.433175 | orchestrator | 
skipping: [testbed-node-2] 2026-03-09 00:55:35.433180 | orchestrator | 2026-03-09 00:55:35.433185 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-09 00:55:35.433200 | orchestrator | Monday 09 March 2026 00:52:27 +0000 (0:00:00.347) 0:04:04.570 ********** 2026-03-09 00:55:35.433206 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.433211 | orchestrator | 2026-03-09 00:55:35.433215 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-09 00:55:35.433220 | orchestrator | Monday 09 March 2026 00:52:29 +0000 (0:00:01.734) 0:04:06.304 ********** 2026-03-09 00:55:35.433229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 00:55:35.433234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-09 
00:55:35.433264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-09 00:55:35.433270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 00:55:35.433287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-09 
00:55:35.433297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 00:55:35.433313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-09 00:55:35.433329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-09 
00:55:35.433339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2026-03-09 00:55:35.433381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-09 00:55:35.433396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:55:35.433422 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}}})  2026-03-09 00:55:35.433476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:55:35.433492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:55:35.433513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:55:35.433518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433535 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:55:35.433540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 
'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:55:35.433583 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:55:35.433594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:55:35.433608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:55:35.433613 | orchestrator | 2026-03-09 00:55:35.433618 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-09 00:55:35.433623 | orchestrator | Monday 09 March 2026 00:52:33 +0000 (0:00:04.602) 0:04:10.907 ********** 2026-03-09 00:55:35.433634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 00:55:35.433640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-09 00:55:35.433667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 00:55:35.433698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:55:35.433706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-09 00:55:35.433765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:55:35.433771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:55:35.433782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433791 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.433805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:55:35.433824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433839 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:55:35.433867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:55:35.433873 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.433878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 00:55:35.433883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-09 00:55:35.433921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:55:35.433955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 
'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-09 00:55:35.433973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.433979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-09 00:55:35.434059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-09 00:55:35.434072 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
00:55:35.434078 | orchestrator | 2026-03-09 00:55:35.434083 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-09 00:55:35.434088 | orchestrator | Monday 09 March 2026 00:52:35 +0000 (0:00:01.661) 0:04:12.569 ********** 2026-03-09 00:55:35.434093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-09 00:55:35.434098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-09 00:55:35.434104 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.434109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-09 00:55:35.434117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-09 00:55:35.434122 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.434128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-09 00:55:35.434133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-09 00:55:35.434138 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.434142 | orchestrator | 2026-03-09 00:55:35.434147 | orchestrator | TASK 
[proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-09 00:55:35.434152 | orchestrator | Monday 09 March 2026 00:52:37 +0000 (0:00:02.135) 0:04:14.704 ********** 2026-03-09 00:55:35.434157 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.434162 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.434167 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.434172 | orchestrator | 2026-03-09 00:55:35.434177 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-09 00:55:35.434182 | orchestrator | Monday 09 March 2026 00:52:38 +0000 (0:00:01.341) 0:04:16.046 ********** 2026-03-09 00:55:35.434187 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.434192 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.434197 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.434209 | orchestrator | 2026-03-09 00:55:35.434214 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-09 00:55:35.434218 | orchestrator | Monday 09 March 2026 00:52:41 +0000 (0:00:02.347) 0:04:18.393 ********** 2026-03-09 00:55:35.434223 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.434228 | orchestrator | 2026-03-09 00:55:35.434233 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-09 00:55:35.434238 | orchestrator | Monday 09 March 2026 00:52:42 +0000 (0:00:01.261) 0:04:19.654 ********** 2026-03-09 00:55:35.434242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.434259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.434267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.434273 | orchestrator | 2026-03-09 00:55:35.434277 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-09 00:55:35.434283 | orchestrator | Monday 09 March 2026 00:52:46 +0000 (0:00:04.057) 0:04:23.712 ********** 2026-03-09 00:55:35.434288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.434296 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.434302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.434307 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.434320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.434325 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.434330 | orchestrator | 2026-03-09 00:55:35.434335 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-09 00:55:35.434340 | orchestrator | Monday 09 March 2026 00:52:47 +0000 
(0:00:00.583) 0:04:24.295 ********** 2026-03-09 00:55:35.434345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434355 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.434359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434372 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.434377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434390 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.434395 | orchestrator | 2026-03-09 00:55:35.434400 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-09 00:55:35.434404 | orchestrator | Monday 
09 March 2026 00:52:47 +0000 (0:00:00.807) 0:04:25.103 ********** 2026-03-09 00:55:35.434409 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.434414 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.434418 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.434423 | orchestrator | 2026-03-09 00:55:35.434428 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-09 00:55:35.434433 | orchestrator | Monday 09 March 2026 00:52:49 +0000 (0:00:01.917) 0:04:27.021 ********** 2026-03-09 00:55:35.434437 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.434442 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.434447 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.434452 | orchestrator | 2026-03-09 00:55:35.434456 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-09 00:55:35.434461 | orchestrator | Monday 09 March 2026 00:52:51 +0000 (0:00:01.787) 0:04:28.809 ********** 2026-03-09 00:55:35.434466 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.434471 | orchestrator | 2026-03-09 00:55:35.434475 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-09 00:55:35.434480 | orchestrator | Monday 09 March 2026 00:52:53 +0000 (0:00:01.646) 0:04:30.456 ********** 2026-03-09 00:55:35.434486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.434501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.434506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 
00:55:35.434518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.434524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.434529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.434543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.434555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.434560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.434565 | orchestrator | 2026-03-09 00:55:35.434570 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-09 00:55:35.434575 | orchestrator | Monday 09 March 2026 00:52:58 +0000 (0:00:04.989) 0:04:35.445 ********** 2026-03-09 00:55:35.434580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.434585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.434599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.434604 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.434616 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.434621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.434626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.434631 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.434636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.434649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.434658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.434665 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.434670 | orchestrator | 2026-03-09 00:55:35.434675 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-09 00:55:35.434680 | orchestrator | Monday 09 March 2026 00:52:59 +0000 (0:00:01.391) 0:04:36.837 ********** 2026-03-09 00:55:35.434685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434695 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434705 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.434710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434734 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.434739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-09 00:55:35.434757 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.434762 | orchestrator | 2026-03-09 00:55:35.434775 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-09 00:55:35.434781 | orchestrator | Monday 09 March 2026 00:53:00 +0000 (0:00:01.174) 0:04:38.011 ********** 2026-03-09 00:55:35.434785 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.434790 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.434794 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.434799 | orchestrator | 2026-03-09 00:55:35.434803 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-09 00:55:35.434808 | orchestrator | Monday 09 March 2026 00:53:02 +0000 (0:00:01.751) 0:04:39.763 ********** 2026-03-09 00:55:35.434814 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.434818 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.434823 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.434827 | orchestrator | 2026-03-09 00:55:35.434832 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-09 00:55:35.434837 | orchestrator | Monday 09 March 2026 00:53:04 +0000 (0:00:02.361) 0:04:42.124 ********** 2026-03-09 00:55:35.434841 | 
orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.434846 | orchestrator | 2026-03-09 00:55:35.434850 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-09 00:55:35.434855 | orchestrator | Monday 09 March 2026 00:53:07 +0000 (0:00:02.017) 0:04:44.142 ********** 2026-03-09 00:55:35.434860 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-09 00:55:35.434865 | orchestrator | 2026-03-09 00:55:35.434872 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-09 00:55:35.434877 | orchestrator | Monday 09 March 2026 00:53:07 +0000 (0:00:00.859) 0:04:45.001 ********** 2026-03-09 00:55:35.434882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-09 00:55:35.434887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-09 00:55:35.434892 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-09 00:55:35.434897 | orchestrator | 2026-03-09 00:55:35.434902 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-09 00:55:35.434912 | orchestrator | Monday 09 March 2026 00:53:13 +0000 (0:00:05.326) 0:04:50.327 ********** 2026-03-09 00:55:35.434917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:55:35.434922 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.434926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:55:35.434931 | orchestrator | skipping: 
[testbed-node-2] 2026-03-09 00:55:35.434945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:55:35.434950 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.434955 | orchestrator | 2026-03-09 00:55:35.434959 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-09 00:55:35.434964 | orchestrator | Monday 09 March 2026 00:53:14 +0000 (0:00:01.220) 0:04:51.548 ********** 2026-03-09 00:55:35.434969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-09 00:55:35.434976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-09 00:55:35.434981 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.434986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-09 00:55:35.435026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-09 00:55:35.435033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-09 00:55:35.435038 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.435043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-09 00:55:35.435048 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.435058 | orchestrator | 2026-03-09 00:55:35.435062 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-09 00:55:35.435067 | orchestrator | Monday 09 March 2026 00:53:16 +0000 (0:00:01.633) 0:04:53.181 ********** 2026-03-09 00:55:35.435072 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.435076 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.435081 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.435086 | orchestrator | 2026-03-09 00:55:35.435090 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-09 00:55:35.435095 | orchestrator | Monday 09 March 2026 00:53:18 +0000 (0:00:02.807) 0:04:55.989 ********** 2026-03-09 00:55:35.435100 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.435104 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.435109 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.435113 | orchestrator | 2026-03-09 00:55:35.435118 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-09 
00:55:35.435123 | orchestrator | Monday 09 March 2026 00:53:22 +0000 (0:00:03.352) 0:04:59.341 ********** 2026-03-09 00:55:35.435127 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-09 00:55:35.435132 | orchestrator | 2026-03-09 00:55:35.435137 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-09 00:55:35.435142 | orchestrator | Monday 09 March 2026 00:53:23 +0000 (0:00:01.484) 0:05:00.825 ********** 2026-03-09 00:55:35.435147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:55:35.435151 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.435166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:55:35.435172 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.435177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 
'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:55:35.435182 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.435187 | orchestrator | 2026-03-09 00:55:35.435191 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-09 00:55:35.435196 | orchestrator | Monday 09 March 2026 00:53:25 +0000 (0:00:01.370) 0:05:02.195 ********** 2026-03-09 00:55:35.435204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:55:35.435214 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.435218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}}}})  2026-03-09 00:55:35.435223 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.435228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-09 00:55:35.435233 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.435237 | orchestrator | 2026-03-09 00:55:35.435243 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-09 00:55:35.435247 | orchestrator | Monday 09 March 2026 00:53:26 +0000 (0:00:01.498) 0:05:03.694 ********** 2026-03-09 00:55:35.435252 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.435257 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.435262 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.435266 | orchestrator | 2026-03-09 00:55:35.435271 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-09 00:55:35.435276 | orchestrator | Monday 09 March 2026 00:53:28 +0000 (0:00:02.142) 0:05:05.837 ********** 2026-03-09 00:55:35.435280 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.435285 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.435290 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.435295 | orchestrator | 2026-03-09 00:55:35.435299 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-09 00:55:35.435304 | orchestrator | Monday 09 March 2026 00:53:31 +0000 (0:00:02.698) 0:05:08.536 
********** 2026-03-09 00:55:35.435308 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.435313 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.435318 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.435323 | orchestrator | 2026-03-09 00:55:35.435327 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-09 00:55:35.435332 | orchestrator | Monday 09 March 2026 00:53:34 +0000 (0:00:03.596) 0:05:12.133 ********** 2026-03-09 00:55:35.435337 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-09 00:55:35.435341 | orchestrator | 2026-03-09 00:55:35.435346 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-09 00:55:35.435350 | orchestrator | Monday 09 March 2026 00:53:35 +0000 (0:00:00.930) 0:05:13.064 ********** 2026-03-09 00:55:35.435365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-09 00:55:35.435375 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.435380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-09 00:55:35.435388 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.435393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-09 00:55:35.435397 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.435402 | orchestrator | 2026-03-09 00:55:35.435407 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-09 00:55:35.435412 | orchestrator | Monday 09 March 2026 00:53:37 +0000 (0:00:01.510) 0:05:14.574 ********** 2026-03-09 00:55:35.435417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-09 00:55:35.435422 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.435427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 
'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-09 00:55:35.435432 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.435436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-09 00:55:35.435441 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.435446 | orchestrator | 2026-03-09 00:55:35.435451 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-09 00:55:35.435455 | orchestrator | Monday 09 March 2026 00:53:39 +0000 (0:00:01.901) 0:05:16.476 ********** 2026-03-09 00:55:35.435460 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.435465 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.435469 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.435474 | orchestrator | 2026-03-09 00:55:35.435479 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-09 00:55:35.435487 | orchestrator | Monday 09 March 2026 00:53:40 +0000 (0:00:01.341) 0:05:17.817 ********** 2026-03-09 00:55:35.435492 | orchestrator | ok: [testbed-node-0] 2026-03-09 
00:55:35.435506 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:35.435511 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:35.435516 | orchestrator |
2026-03-09 00:55:35.435521 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-09 00:55:35.435525 | orchestrator | Monday 09 March 2026 00:53:43 +0000 (0:00:02.932) 0:05:20.749 **********
2026-03-09 00:55:35.435530 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:55:35.435535 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:55:35.435539 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:55:35.435544 | orchestrator |
2026-03-09 00:55:35.435548 | orchestrator | TASK [include_role : octavia] **************************************************
2026-03-09 00:55:35.435553 | orchestrator | Monday 09 March 2026 00:53:47 +0000 (0:00:03.610) 0:05:24.359 **********
2026-03-09 00:55:35.435558 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:55:35.435562 | orchestrator |
2026-03-09 00:55:35.435566 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-03-09 00:55:35.435570 | orchestrator | Monday 09 March 2026 00:53:48 +0000 (0:00:01.722) 0:05:26.082 **********
2026-03-09 00:55:35.435577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external':
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.435582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 00:55:35.435587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.435592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.435609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 00:55:35.435614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.435621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.435626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.435630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.435635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.435639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.435655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 00:55:35.435659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.435667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.435671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.435676 | orchestrator | 2026-03-09 00:55:35.435680 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-09 00:55:35.435684 | orchestrator | Monday 09 March 2026 00:53:52 +0000 (0:00:03.978) 0:05:30.061 ********** 2026-03-09 00:55:35.435688 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.435696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 00:55:35.435708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.435713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.435718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.435722 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.435726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.435731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 00:55:35.435740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.435752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.435804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 00:55:35.435817 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.435824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.435829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 00:55:35.435833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.435841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 00:55:35.435856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-09 00:55:35.435861 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:35.435866 | orchestrator |
2026-03-09 00:55:35.435870 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-03-09 00:55:35.435874 | orchestrator | Monday 09 March 2026 00:53:53 +0000 (0:00:00.789) 0:05:30.850 **********
2026-03-09 00:55:35.435879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-09 00:55:35.435883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-09 00:55:35.435888 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:35.435893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-09 00:55:35.435899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-09 00:55:35.435904 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:35.435908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-09 00:55:35.435912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-09 00:55:35.435916 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:35.435921 | orchestrator |
2026-03-09 00:55:35.435925 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-03-09 00:55:35.435929 | orchestrator | Monday 09 March 2026 00:53:55 +0000 (0:00:01.656) 0:05:32.507 **********
2026-03-09 00:55:35.435933 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:35.435938 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:55:35.435942 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:55:35.435950 | orchestrator |
2026-03-09 00:55:35.435954 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-03-09 00:55:35.435958 | orchestrator | Monday 09 March 2026 00:53:56 +0000 (0:00:01.466) 0:05:33.974 **********
2026-03-09 00:55:35.435962 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:55:35.435967 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:55:35.435971 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:55:35.435975 | orchestrator |
2026-03-09 00:55:35.435979 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-03-09 00:55:35.435984 | orchestrator | Monday 09 March 2026 00:53:59 +0000 (0:00:02.266) 0:05:36.240 **********
2026-03-09 00:55:35.435988 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:55:35.435992 | orchestrator |
2026-03-09 00:55:35.436012 | orchestrator | TASK
[haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-09 00:55:35.436017 | orchestrator | Monday 09 March 2026 00:54:00 +0000 (0:00:01.838) 0:05:38.079 ********** 2026-03-09 00:55:35.436022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:55:35.436035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:55:35.436041 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:55:35.436049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:55:35.436058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:55:35.436075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:55:35.436080 | orchestrator | 2026-03-09 00:55:35.436085 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-09 00:55:35.436089 | orchestrator | Monday 09 March 2026 00:54:06 +0000 (0:00:05.620) 0:05:43.700 ********** 2026-03-09 00:55:35.436093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 00:55:35.436101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 00:55:35.436109 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.436114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 00:55:35.436118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 00:55:35.436131 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.436136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 00:55:35.436143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 00:55:35.436152 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.436156 | orchestrator | 2026-03-09 00:55:35.436161 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-09 00:55:35.436165 | orchestrator | Monday 09 March 2026 00:54:07 +0000 (0:00:00.733) 0:05:44.434 ********** 2026-03-09 00:55:35.436170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-09 00:55:35.436174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-09 00:55:35.436179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-09 00:55:35.436183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-09 00:55:35.436188 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.436192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-09 00:55:35.436197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-09 00:55:35.436202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-09 00:55:35.436206 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.436211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-09 00:55:35.436224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-09 00:55:35.436229 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.436233 | orchestrator | 2026-03-09 00:55:35.436237 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-09 00:55:35.436241 | orchestrator | Monday 09 March 2026 00:54:08 +0000 (0:00:01.014) 0:05:45.448 ********** 2026-03-09 00:55:35.436246 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.436250 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.436258 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.436262 | orchestrator | 2026-03-09 00:55:35.436266 | orchestrator | TASK [proxysql-config : Copying over opensearch 
ProxySQL rules config] ********* 2026-03-09 00:55:35.436271 | orchestrator | Monday 09 March 2026 00:54:09 +0000 (0:00:00.957) 0:05:46.406 ********** 2026-03-09 00:55:35.436275 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.436279 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.436283 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.436288 | orchestrator | 2026-03-09 00:55:35.436292 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-09 00:55:35.436297 | orchestrator | Monday 09 March 2026 00:54:10 +0000 (0:00:01.607) 0:05:48.013 ********** 2026-03-09 00:55:35.436301 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.436305 | orchestrator | 2026-03-09 00:55:35.436312 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-09 00:55:35.436317 | orchestrator | Monday 09 March 2026 00:54:12 +0000 (0:00:01.674) 0:05:49.688 ********** 2026-03-09 00:55:35.436321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-09 00:55:35.436326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:55:35.436331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-09 00:55:35.436356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:55:35.436365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:55:35.436370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436374 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:55:35.436383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-09 00:55:35.436388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:55:35.436404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:55:35.436420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-09 00:55:35.436425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 
45s']}}}})  2026-03-09 00:55:35.436429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-09 00:55:35.436454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:55:35.436458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-09 00:55:35.436463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:55:35.436486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-09 00:55:35.436493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-09 00:55:35.436498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:55:35.436515 | orchestrator | 2026-03-09 00:55:35.436519 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-09 00:55:35.436524 | orchestrator | Monday 09 March 2026 00:54:17 +0000 (0:00:05.432) 0:05:55.121 ********** 2026-03-09 00:55:35.436530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-09 00:55:35.436536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:55:35.436542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:55:35.436556 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-09 00:55:35.436566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-09 00:55:35.436574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-09 00:55:35.436581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:55:35.436585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:55:35.436611 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.436618 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:55:35.436625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-09 00:55:35.436630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-09 00:55:35.436635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:55:35.436652 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.436660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-09 00:55:35.436665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 00:55:35.436672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 00:55:35.436690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-09 00:55:35.436698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-09 00:55:35.436703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 00:55:35.436715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 00:55:35.436720 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.436724 | orchestrator | 2026-03-09 00:55:35.436728 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-09 00:55:35.436732 | orchestrator | Monday 09 March 2026 00:54:18 +0000 (0:00:00.936) 0:05:56.058 ********** 2026-03-09 00:55:35.436737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-09 00:55:35.436741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-09 00:55:35.436749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-09 00:55:35.436754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-09 00:55:35.436759 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.436763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-09 00:55:35.436768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-09 00:55:35.436772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-09 00:55:35.436776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-09 00:55:35.436781 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.436787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-09 00:55:35.436792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}) 
2026-03-09 00:55:35.436796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}) 
2026-03-09 00:55:35.436801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}) 
2026-03-09 00:55:35.436805 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:35.436809 | orchestrator |
2026-03-09 00:55:35.436814 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-03-09 00:55:35.436820 | orchestrator | Monday 09 March 2026 00:54:19 +0000 (0:00:01.045) 0:05:57.103 **********
2026-03-09 00:55:35.436824 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:35.436829 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:35.436833 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:35.436837 | orchestrator |
2026-03-09 00:55:35.436842 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-03-09 00:55:35.436846 | orchestrator | Monday 09 March 2026 00:54:20 +0000 (0:00:00.516) 0:05:57.619 **********
2026-03-09 00:55:35.436855 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:55:35.436859 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:55:35.436864 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:55:35.436868 | orchestrator |
2026-03-09 00:55:35.436872 | orchestrator | TASK [include_role : rabbitmq] 
************************************************* 2026-03-09 00:55:35.436876 | orchestrator | Monday 09 March 2026 00:54:22 +0000 (0:00:01.556) 0:05:59.176 ********** 2026-03-09 00:55:35.436880 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.436884 | orchestrator | 2026-03-09 00:55:35.436889 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-09 00:55:35.436893 | orchestrator | Monday 09 March 2026 00:54:23 +0000 (0:00:01.882) 0:06:01.059 ********** 2026-03-09 00:55:35.436897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:55:35.436902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:55:35.436910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-09 00:55:35.436915 | orchestrator | 2026-03-09 00:55:35.436919 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-09 00:55:35.436923 | orchestrator | Monday 09 March 2026 00:54:26 +0000 (0:00:02.719) 0:06:03.779 ********** 2026-03-09 00:55:35.436930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 
'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-09 00:55:35.436939 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.436943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-09 
00:55:35.436948 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.436952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-09 00:55:35.436956 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.436961 | orchestrator | 2026-03-09 00:55:35.436965 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-09 00:55:35.436972 | orchestrator | Monday 09 March 2026 00:54:27 +0000 (0:00:00.793) 0:06:04.573 ********** 2026-03-09 00:55:35.436977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-09 00:55:35.436981 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.436985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-09 00:55:35.436990 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.436994 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-09 00:55:35.437014 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437019 | orchestrator | 2026-03-09 00:55:35.437023 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-09 00:55:35.437027 | orchestrator | Monday 09 March 2026 00:54:28 +0000 (0:00:00.774) 0:06:05.347 ********** 2026-03-09 00:55:35.437032 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437036 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437040 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437044 | orchestrator | 2026-03-09 00:55:35.437048 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-09 00:55:35.437053 | orchestrator | Monday 09 March 2026 00:54:28 +0000 (0:00:00.497) 0:06:05.844 ********** 2026-03-09 00:55:35.437059 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437063 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437068 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437072 | orchestrator | 2026-03-09 00:55:35.437076 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-09 00:55:35.437080 | orchestrator | Monday 09 March 2026 00:54:30 +0000 (0:00:01.430) 0:06:07.274 ********** 2026-03-09 00:55:35.437085 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:55:35.437089 | orchestrator | 2026-03-09 00:55:35.437093 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-09 00:55:35.437097 | orchestrator | Monday 09 March 2026 00:54:32 +0000 (0:00:01.956) 0:06:09.231 ********** 2026-03-09 00:55:35.437102 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.437108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.437115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.437126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.437131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.437135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-09 00:55:35.437140 | orchestrator | 2026-03-09 00:55:35.437144 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-09 00:55:35.437149 | orchestrator | Monday 09 March 2026 00:54:38 +0000 (0:00:06.830) 0:06:16.061 ********** 
2026-03-09 00:55:35.437156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.437163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.437168 | orchestrator | skipping: [testbed-node-0] 2026-03-09 
00:55:35.437175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.437179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.437184 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
00:55:35.437188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.437199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-09 00:55:35.437204 | orchestrator | skipping: [testbed-node-1] 2026-03-09 
00:55:35.437208 | orchestrator | 2026-03-09 00:55:35.437212 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-09 00:55:35.437217 | orchestrator | Monday 09 March 2026 00:54:39 +0000 (0:00:00.788) 0:06:16.850 ********** 2026-03-09 00:55:35.437221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-09 00:55:35.437228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-09 00:55:35.437233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-09 00:55:35.437237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-09 00:55:35.437242 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-09 00:55:35.437250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-09 00:55:35.437254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-09 00:55:35.437258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-09 00:55:35.437263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-09 00:55:35.437269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-09 00:55:35.437279 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-09 00:55:35.437288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-09 00:55:35.437292 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437296 | orchestrator | 2026-03-09 00:55:35.437300 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-09 00:55:35.437305 | orchestrator | Monday 09 March 2026 00:54:41 +0000 (0:00:01.837) 0:06:18.687 ********** 2026-03-09 00:55:35.437309 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.437313 | 
orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.437317 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.437321 | orchestrator | 2026-03-09 00:55:35.437326 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-09 00:55:35.437333 | orchestrator | Monday 09 March 2026 00:54:43 +0000 (0:00:01.543) 0:06:20.230 ********** 2026-03-09 00:55:35.437337 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.437341 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.437345 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.437350 | orchestrator | 2026-03-09 00:55:35.437354 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-09 00:55:35.437358 | orchestrator | Monday 09 March 2026 00:54:45 +0000 (0:00:02.469) 0:06:22.700 ********** 2026-03-09 00:55:35.437362 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437366 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437370 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437375 | orchestrator | 2026-03-09 00:55:35.437379 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-09 00:55:35.437383 | orchestrator | Monday 09 March 2026 00:54:45 +0000 (0:00:00.353) 0:06:23.053 ********** 2026-03-09 00:55:35.437388 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437392 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437396 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437400 | orchestrator | 2026-03-09 00:55:35.437405 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-09 00:55:35.437409 | orchestrator | Monday 09 March 2026 00:54:46 +0000 (0:00:00.327) 0:06:23.381 ********** 2026-03-09 00:55:35.437413 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437417 | 
orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437422 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437426 | orchestrator | 2026-03-09 00:55:35.437430 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-09 00:55:35.437437 | orchestrator | Monday 09 March 2026 00:54:46 +0000 (0:00:00.728) 0:06:24.109 ********** 2026-03-09 00:55:35.437441 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437445 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437450 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437454 | orchestrator | 2026-03-09 00:55:35.437458 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-09 00:55:35.437462 | orchestrator | Monday 09 March 2026 00:54:47 +0000 (0:00:00.359) 0:06:24.469 ********** 2026-03-09 00:55:35.437467 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437471 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437475 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437479 | orchestrator | 2026-03-09 00:55:35.437483 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-09 00:55:35.437487 | orchestrator | Monday 09 March 2026 00:54:47 +0000 (0:00:00.344) 0:06:24.814 ********** 2026-03-09 00:55:35.437491 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437499 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437503 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437508 | orchestrator | 2026-03-09 00:55:35.437512 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-09 00:55:35.437516 | orchestrator | Monday 09 March 2026 00:54:48 +0000 (0:00:00.992) 0:06:25.806 ********** 2026-03-09 00:55:35.437520 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.437525 | 
orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.437529 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.437533 | orchestrator | 2026-03-09 00:55:35.437537 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-09 00:55:35.437542 | orchestrator | Monday 09 March 2026 00:54:49 +0000 (0:00:00.727) 0:06:26.533 ********** 2026-03-09 00:55:35.437546 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.437550 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.437554 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.437559 | orchestrator | 2026-03-09 00:55:35.437563 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-09 00:55:35.437567 | orchestrator | Monday 09 March 2026 00:54:49 +0000 (0:00:00.386) 0:06:26.919 ********** 2026-03-09 00:55:35.437571 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.437575 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.437580 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.437584 | orchestrator | 2026-03-09 00:55:35.437588 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-09 00:55:35.437592 | orchestrator | Monday 09 March 2026 00:54:50 +0000 (0:00:00.938) 0:06:27.858 ********** 2026-03-09 00:55:35.437596 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.437601 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.437605 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.437609 | orchestrator | 2026-03-09 00:55:35.437613 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-09 00:55:35.437617 | orchestrator | Monday 09 March 2026 00:54:52 +0000 (0:00:01.353) 0:06:29.212 ********** 2026-03-09 00:55:35.437621 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.437625 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.437630 | 
orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.437634 | orchestrator | 2026-03-09 00:55:35.437638 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-09 00:55:35.437642 | orchestrator | Monday 09 March 2026 00:54:53 +0000 (0:00:01.061) 0:06:30.273 ********** 2026-03-09 00:55:35.437647 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.437651 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.437655 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.437660 | orchestrator | 2026-03-09 00:55:35.437664 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-09 00:55:35.437668 | orchestrator | Monday 09 March 2026 00:54:58 +0000 (0:00:05.119) 0:06:35.393 ********** 2026-03-09 00:55:35.437672 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.437676 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.437681 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.437685 | orchestrator | 2026-03-09 00:55:35.437689 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-09 00:55:35.437693 | orchestrator | Monday 09 March 2026 00:55:01 +0000 (0:00:02.788) 0:06:38.182 ********** 2026-03-09 00:55:35.437698 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.437702 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.437706 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.437710 | orchestrator | 2026-03-09 00:55:35.437715 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-09 00:55:35.437719 | orchestrator | Monday 09 March 2026 00:55:16 +0000 (0:00:15.813) 0:06:53.995 ********** 2026-03-09 00:55:35.437723 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.437730 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.437735 | orchestrator | ok: [testbed-node-2] 
2026-03-09 00:55:35.437743 | orchestrator | 2026-03-09 00:55:35.437747 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-09 00:55:35.437751 | orchestrator | Monday 09 March 2026 00:55:17 +0000 (0:00:00.843) 0:06:54.839 ********** 2026-03-09 00:55:35.437755 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:55:35.437759 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:55:35.437764 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:55:35.437768 | orchestrator | 2026-03-09 00:55:35.437772 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-09 00:55:35.437776 | orchestrator | Monday 09 March 2026 00:55:28 +0000 (0:00:10.323) 0:07:05.162 ********** 2026-03-09 00:55:35.437780 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437784 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437789 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437793 | orchestrator | 2026-03-09 00:55:35.437797 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-09 00:55:35.437801 | orchestrator | Monday 09 March 2026 00:55:28 +0000 (0:00:00.388) 0:07:05.550 ********** 2026-03-09 00:55:35.437806 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437810 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437814 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437818 | orchestrator | 2026-03-09 00:55:35.437823 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-09 00:55:35.437827 | orchestrator | Monday 09 March 2026 00:55:29 +0000 (0:00:00.747) 0:07:06.297 ********** 2026-03-09 00:55:35.437831 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437838 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437842 | orchestrator | skipping: [testbed-node-2] 
2026-03-09 00:55:35.437846 | orchestrator | 2026-03-09 00:55:35.437851 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-09 00:55:35.437855 | orchestrator | Monday 09 March 2026 00:55:29 +0000 (0:00:00.371) 0:07:06.669 ********** 2026-03-09 00:55:35.437859 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437863 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437867 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437872 | orchestrator | 2026-03-09 00:55:35.437876 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-09 00:55:35.437880 | orchestrator | Monday 09 March 2026 00:55:29 +0000 (0:00:00.353) 0:07:07.023 ********** 2026-03-09 00:55:35.437885 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437889 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437893 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437897 | orchestrator | 2026-03-09 00:55:35.437901 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-09 00:55:35.437905 | orchestrator | Monday 09 March 2026 00:55:30 +0000 (0:00:00.410) 0:07:07.433 ********** 2026-03-09 00:55:35.437910 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:55:35.437914 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:55:35.437918 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:55:35.437922 | orchestrator | 2026-03-09 00:55:35.437926 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-09 00:55:35.437931 | orchestrator | Monday 09 March 2026 00:55:30 +0000 (0:00:00.406) 0:07:07.840 ********** 2026-03-09 00:55:35.437935 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.437939 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.437943 | orchestrator | ok: [testbed-node-2] 2026-03-09 
00:55:35.437947 | orchestrator | 2026-03-09 00:55:35.437951 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-09 00:55:35.437956 | orchestrator | Monday 09 March 2026 00:55:32 +0000 (0:00:01.596) 0:07:09.436 ********** 2026-03-09 00:55:35.437960 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:55:35.437964 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:55:35.437968 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:55:35.437972 | orchestrator | 2026-03-09 00:55:35.437976 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:55:35.437985 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-09 00:55:35.437990 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-09 00:55:35.437994 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-09 00:55:35.438049 | orchestrator | 2026-03-09 00:55:35.438054 | orchestrator | 2026-03-09 00:55:35.438058 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:55:35.438063 | orchestrator | Monday 09 March 2026 00:55:33 +0000 (0:00:00.855) 0:07:10.292 ********** 2026-03-09 00:55:35.438067 | orchestrator | =============================================================================== 2026-03-09 00:55:35.438071 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.81s 2026-03-09 00:55:35.438076 | orchestrator | loadbalancer : Start backup keepalived container ----------------------- 10.32s 2026-03-09 00:55:35.438080 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 6.94s 2026-03-09 00:55:35.438084 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.83s 
2026-03-09 00:55:35.438089 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.66s 2026-03-09 00:55:35.438093 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 6.41s 2026-03-09 00:55:35.438097 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.62s 2026-03-09 00:55:35.438101 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.43s 2026-03-09 00:55:35.438105 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.37s 2026-03-09 00:55:35.438113 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 5.35s 2026-03-09 00:55:35.438117 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.33s 2026-03-09 00:55:35.438121 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.29s 2026-03-09 00:55:35.438126 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.12s 2026-03-09 00:55:35.438130 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.08s 2026-03-09 00:55:35.438134 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.08s 2026-03-09 00:55:35.438138 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 5.04s 2026-03-09 00:55:35.438142 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.99s 2026-03-09 00:55:35.438146 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.65s 2026-03-09 00:55:35.438151 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.60s 2026-03-09 00:55:35.438155 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.28s 
2026-03-09 00:55:38.456052 | orchestrator | 2026-03-09 00:55:38 | INFO  | Task f645106b-c781-49cf-bd32-f7a407860a63 is in state STARTED 2026-03-09 00:55:38.457828 | orchestrator | 2026-03-09 00:55:38 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:55:38.459450 | orchestrator | 2026-03-09 00:55:38 | INFO  | Task 12928b58-36d7-4f8d-bc4a-2ea73d023c26 is in state STARTED 2026-03-09 00:55:38.459489 | orchestrator | 2026-03-09 00:55:38 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:41.491375 | orchestrator | 2026-03-09 00:55:41 | INFO  | Task f645106b-c781-49cf-bd32-f7a407860a63 is in state STARTED 2026-03-09 00:55:41.492294 | orchestrator | 2026-03-09 00:55:41 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:55:41.495534 | orchestrator | 2026-03-09 00:55:41 | INFO  | Task 12928b58-36d7-4f8d-bc4a-2ea73d023c26 is in state STARTED 2026-03-09 00:55:41.495572 | orchestrator | 2026-03-09 00:55:41 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:44.521979 | orchestrator | 2026-03-09 00:55:44 | INFO  | Task f645106b-c781-49cf-bd32-f7a407860a63 is in state STARTED 2026-03-09 00:55:44.522125 | orchestrator | 2026-03-09 00:55:44 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:55:44.524742 | orchestrator | 2026-03-09 00:55:44 | INFO  | Task 12928b58-36d7-4f8d-bc4a-2ea73d023c26 is in state STARTED 2026-03-09 00:55:44.524837 | orchestrator | 2026-03-09 00:55:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:55:47.573511 | orchestrator | 2026-03-09 00:55:47 | INFO  | Task f645106b-c781-49cf-bd32-f7a407860a63 is in state STARTED 2026-03-09 00:55:47.573612 | orchestrator | 2026-03-09 00:55:47 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED 2026-03-09 00:55:47.573627 | orchestrator | 2026-03-09 00:55:47 | INFO  | Task 12928b58-36d7-4f8d-bc4a-2ea73d023c26 is in state STARTED 2026-03-09 00:55:47.573641 | 
orchestrator | 2026-03-09 00:55:47 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:55:50.621173 | orchestrator | 2026-03-09 00:55:50 | INFO  | Task f645106b-c781-49cf-bd32-f7a407860a63 is in state STARTED
2026-03-09 00:55:50.622717 | orchestrator | 2026-03-09 00:55:50 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state STARTED
2026-03-09 00:55:50.623608 | orchestrator | 2026-03-09 00:55:50 | INFO  | Task 12928b58-36d7-4f8d-bc4a-2ea73d023c26 is in state STARTED
2026-03-09 00:55:50.623653 | orchestrator | 2026-03-09 00:55:50 | INFO  | Wait 1 second(s) until the next check
[... the same three status checks repeat every ~3 seconds from 00:55:53 through 00:57:31, all three tasks remaining in state STARTED ...]
2026-03-09 00:57:34.440303 | orchestrator | 2026-03-09 00:57:34 | INFO  | Task f645106b-c781-49cf-bd32-f7a407860a63 is in state STARTED
2026-03-09 00:57:34.447594 | orchestrator
| 2026-03-09 00:57:34.447740 | orchestrator | 2026-03-09 00:57:34 | INFO  | Task 9cfa2bbb-ee47-4207-9e4e-41388a0d079f is in state SUCCESS
2026-03-09 00:57:34.449205 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-09 00:57:34.449258 | orchestrator | 2.16.14
2026-03-09 00:57:34.449275 | orchestrator |
2026-03-09 00:57:34.449291 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-09 00:57:34.449305 | orchestrator |
2026-03-09 00:57:34.449319 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-09 00:57:34.449333 | orchestrator | Monday 09 March 2026 00:45:51 +0000 (0:00:00.695) 0:00:00.695 **********
2026-03-09 00:57:34.449348 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.449363 | orchestrator |
2026-03-09 00:57:34.449377 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-09 00:57:34.449390 | orchestrator | Monday 09 March 2026 00:45:52 +0000 (0:00:01.051) 0:00:01.747 **********
2026-03-09 00:57:34.450263 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.450286 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.450301 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.450316 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.450330 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.450343 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.450356 | orchestrator |
2026-03-09 00:57:34.450370 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-09 00:57:34.450384 | orchestrator | Monday 09 March 2026 00:45:53 +0000 (0:00:01.707) 0:00:03.454 **********
2026-03-09 00:57:34.450397 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.450412 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.450425 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.450440 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.450456 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.450471 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.450485 | orchestrator |
2026-03-09 00:57:34.450500 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-09 00:57:34.450513 | orchestrator | Monday 09 March 2026 00:45:54 +0000 (0:00:00.562) 0:00:04.016 **********
2026-03-09 00:57:34.450528 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.450540 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.450553 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.450567 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.450580 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.450594 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.451972 | orchestrator |
2026-03-09 00:57:34.452008 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-09 00:57:34.452021 | orchestrator | Monday 09 March 2026 00:45:55 +0000 (0:00:01.077) 0:00:05.094 **********
2026-03-09 00:57:34.452032 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.452043 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.452054 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.452065 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.452077 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.452090 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.452101 | orchestrator |
2026-03-09 00:57:34.452113 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-09 00:57:34.452124 | orchestrator | Monday 09 March 2026 00:45:56 +0000 (0:00:00.824) 0:00:05.918 **********
2026-03-09 00:57:34.452161 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.452174 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.452185 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.452196 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.452207 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.452218 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.452229 | orchestrator |
2026-03-09 00:57:34.452240 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-09 00:57:34.452252 | orchestrator | Monday 09 March 2026 00:45:56 +0000 (0:00:00.544) 0:00:06.463 **********
2026-03-09 00:57:34.452264 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.452275 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.452286 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.452297 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.453040 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.453063 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.453071 | orchestrator |
2026-03-09 00:57:34.453079 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-09 00:57:34.453088 | orchestrator | Monday 09 March 2026 00:45:57 +0000 (0:00:00.567) 0:00:07.031 **********
2026-03-09 00:57:34.453115 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.453128 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.453139 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.453149 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.453161 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.453171 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.453182 | orchestrator |
2026-03-09 00:57:34.453193 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-09 00:57:34.453220 | orchestrator | Monday 09 March 2026 00:45:57 +0000 (0:00:00.577) 0:00:07.609 **********
2026-03-09 00:57:34.453233 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.453245 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.453257 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.453268 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.453279 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.453287 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.453295 | orchestrator |
2026-03-09 00:57:34.453303 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-09 00:57:34.453311 | orchestrator | Monday 09 March 2026 00:45:58 +0000 (0:00:00.990) 0:00:08.600 **********
2026-03-09 00:57:34.453319 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 00:57:34.453327 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 00:57:34.453336 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 00:57:34.453347 | orchestrator |
2026-03-09 00:57:34.453357 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-09 00:57:34.453367 | orchestrator | Monday 09 March 2026 00:45:59 +0000 (0:00:00.875) 0:00:09.475 **********
2026-03-09 00:57:34.453378 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.453389 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.453396 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.453418 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.453425 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.453431 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.453437 | orchestrator |
2026-03-09 00:57:34.453443 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-09 00:57:34.453450 | orchestrator | Monday 09 March 2026 00:46:01 +0000 (0:00:02.061) 0:00:11.538 **********
2026-03-09 00:57:34.453456 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 00:57:34.453463 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 00:57:34.453469 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 00:57:34.453485 | orchestrator |
2026-03-09 00:57:34.453492 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-09 00:57:34.453498 | orchestrator | Monday 09 March 2026 00:46:04 +0000 (0:00:02.482) 0:00:14.021 **********
2026-03-09 00:57:34.453504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-09 00:57:34.453511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-09 00:57:34.453517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-09 00:57:34.453523 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.453529 | orchestrator |
2026-03-09 00:57:34.453535 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-09 00:57:34.453542 | orchestrator | Monday 09 March 2026 00:46:05 +0000 (0:00:00.923) 0:00:14.945 **********
2026-03-09 00:57:34.453550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-09 00:57:34.453559 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-09 00:57:34.453565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-09 00:57:34.453571 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.453578 | orchestrator |
2026-03-09 00:57:34.453584 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-09 00:57:34.453590 | orchestrator | Monday 09 March 2026 00:46:07 +0000 (0:00:01.735) 0:00:16.681 **********
2026-03-09 00:57:34.453598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-09 00:57:34.453608 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-09 00:57:34.453620 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-09 00:57:34.453627 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.453633 | orchestrator |
2026-03-09 00:57:34.453640 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-09 00:57:34.453646 | orchestrator | Monday 09 March 2026 00:46:07 +0000 (0:00:00.808) 0:00:17.489 **********
2026-03-09 00:57:34.453661 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-09 00:46:02.567530', 'end': '2026-03-09 00:46:02.690247', 'delta': '0:00:00.122717', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-09 00:57:34.453675 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-09 00:46:03.392603', 'end': '2026-03-09 00:46:03.497326', 'delta': '0:00:00.104723', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-09 00:57:34.453682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-09 00:46:04.059385', 'end': '2026-03-09 00:46:04.148596', 'delta': '0:00:00.089211', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-09 00:57:34.453689 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.453695 | orchestrator |
2026-03-09 00:57:34.453702 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-09 00:57:34.453708 | orchestrator | Monday 09 March 2026 00:46:07 +0000 (0:00:00.153) 0:00:17.643 **********
2026-03-09 00:57:34.453714 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.453720 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.453728 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.453738 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.453748 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.453759 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.453767 | orchestrator |
2026-03-09 00:57:34.453777 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-09 00:57:34.453787 | orchestrator | Monday 09 March 2026 00:46:10 +0000 (0:00:02.387) 0:00:20.031 **********
2026-03-09 00:57:34.453797 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-09 00:57:34.453807 | orchestrator |
2026-03-09 00:57:34.453817 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-09 00:57:34.453827 | orchestrator | Monday 09 March 2026 00:46:11 +0000 (0:00:00.764) 0:00:20.795 **********
2026-03-09 00:57:34.453838 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.453848 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.453882 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.453894 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.453905 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.453915 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.453926 | orchestrator |
2026-03-09 00:57:34.453937 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-09 00:57:34.453947 | orchestrator | Monday 09 March 2026 00:46:13 +0000 (0:00:02.114) 0:00:22.910 **********
2026-03-09 00:57:34.453958 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.453968 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.453979 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.453985 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.453999 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.454005 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.454011 | orchestrator |
2026-03-09 00:57:34.454045 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-09 00:57:34.454052 | orchestrator | Monday 09 March 2026 00:46:15 +0000 (0:00:02.262) 0:00:25.173 **********
2026-03-09 00:57:34.454062 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.454069 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.454075 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.454081 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.454087 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.454093 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.454099 | orchestrator |
2026-03-09 00:57:34.454105 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-09 00:57:34.454112 | orchestrator | Monday 09 March 2026 00:46:17 +0000 (0:00:00.157) 0:00:26.819 **********
2026-03-09 00:57:34.454118 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.454124 | orchestrator |
2026-03-09 00:57:34.454130 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-09 00:57:34.454168 | orchestrator | Monday 09 March 2026 00:46:17 +0000 (0:00:00.244) 0:00:26.977 **********
2026-03-09 00:57:34.454176 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.454182 | orchestrator |
2026-03-09 00:57:34.454188 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-09 00:57:34.454227 | orchestrator | Monday 09 March 2026 00:46:17 +0000 (0:00:00.592) 0:00:27.222 **********
2026-03-09 00:57:34.454234 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.454240 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.454266 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.454300 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.454308 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.454314 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.454320 | orchestrator |
2026-03-09 00:57:34.454327 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-09 00:57:34.454333 | orchestrator | Monday 09 March 2026 00:46:18 +0000 (0:00:00.856) 0:00:27.814 **********
2026-03-09 00:57:34.454340 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.454346 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.454352 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.454358 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.454365 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.454371 | orchestrator | skipping:
[testbed-node-2] 2026-03-09 00:57:34.454377 | orchestrator | 2026-03-09 00:57:34.454383 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-09 00:57:34.454390 | orchestrator | Monday 09 March 2026 00:46:19 +0000 (0:00:00.856) 0:00:28.671 ********** 2026-03-09 00:57:34.454396 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.454402 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.454408 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.454414 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.454421 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.454427 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.454433 | orchestrator | 2026-03-09 00:57:34.454439 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-09 00:57:34.454446 | orchestrator | Monday 09 March 2026 00:46:19 +0000 (0:00:00.854) 0:00:29.526 ********** 2026-03-09 00:57:34.454452 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.454458 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.454464 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.454470 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.454477 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.454483 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.454489 | orchestrator | 2026-03-09 00:57:34.454496 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-09 00:57:34.454507 | orchestrator | Monday 09 March 2026 00:46:20 +0000 (0:00:01.020) 0:00:30.546 ********** 2026-03-09 00:57:34.454514 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.454520 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.454526 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.454533 | orchestrator | skipping: 
[testbed-node-0] 2026-03-09 00:57:34.454539 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.454545 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.454551 | orchestrator | 2026-03-09 00:57:34.454558 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-09 00:57:34.454564 | orchestrator | Monday 09 March 2026 00:46:21 +0000 (0:00:01.074) 0:00:31.621 ********** 2026-03-09 00:57:34.454570 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.454576 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.454583 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.454589 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.454595 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.454601 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.454607 | orchestrator | 2026-03-09 00:57:34.454614 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-09 00:57:34.454620 | orchestrator | Monday 09 March 2026 00:46:22 +0000 (0:00:00.678) 0:00:32.299 ********** 2026-03-09 00:57:34.454627 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.454633 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.454639 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.454645 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.454652 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.454658 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.454664 | orchestrator | 2026-03-09 00:57:34.454670 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-09 00:57:34.454677 | orchestrator | Monday 09 March 2026 00:46:23 +0000 (0:00:00.589) 0:00:32.889 ********** 2026-03-09 00:57:34.454685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--5d9cda85--a301--5b16--a7fe--308b162b7259-osd--block--5d9cda85--a301--5b16--a7fe--308b162b7259', 'dm-uuid-LVM-HMglKMgOarJt39elepRreQ13BbpBTpIwgcHAQSWoKwrA5ROauy6uoqWqljFkY8Uw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8734b320--4ffe--530d--8e73--0aec819257b4-osd--block--8734b320--4ffe--530d--8e73--0aec819257b4', 'dm-uuid-LVM-0oRFpggrbg2gDDWUKFXLRyOv3OVjB5p678FZlGpzndE4EOgbqu12F7mdcfnww5Ot'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454828 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--deb603ca--2db3--5399--8e8d--1e0d01641e0c-osd--block--deb603ca--2db3--5399--8e8d--1e0d01641e0c', 'dm-uuid-LVM-ymUw0TIiv27vbmGZKzqUO1xTKJjd4LELlXUeXZ0R5xZpaLUzedLAgzLI2r7WHUmD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1f67558--6290--50a7--9c09--ea5e74fb08ab-osd--block--c1f67558--6290--50a7--9c09--ea5e74fb08ab', 'dm-uuid-LVM-YsXR6FhgZvrm6EivKPjX3dlWMAJuQcNNm5yd8wUg87KYebMgLJonznJrEwWBLQt0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.454968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.454984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5d9cda85--a301--5b16--a7fe--308b162b7259-osd--block--5d9cda85--a301--5b16--a7fe--308b162b7259'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m7qQkp-HCAP-ekOq-9sXu-j33Q-bbOW-LIzZw2', 'scsi-0QEMU_QEMU_HARDDISK_26907958-5014-4e4e-aaae-f132ebc9345b', 'scsi-SQEMU_QEMU_HARDDISK_26907958-5014-4e4e-aaae-f132ebc9345b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.454997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8734b320--4ffe--530d--8e73--0aec819257b4-osd--block--8734b320--4ffe--530d--8e73--0aec819257b4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-n1IWuy-ahs9-DtYW-xlXK-0evh-ueMl-JlfPEM', 'scsi-0QEMU_QEMU_HARDDISK_763f54df-2df6-4a17-b758-6e7498448fae', 'scsi-SQEMU_QEMU_HARDDISK_763f54df-2df6-4a17-b758-6e7498448fae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49ad7546-ef2d-4696-ae5b-c2e2e05846ff', 'scsi-SQEMU_QEMU_HARDDISK_49ad7546-ef2d-4696-ae5b-c2e2e05846ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-09 00:57:34.455050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455086 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.455103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--deb603ca--2db3--5399--8e8d--1e0d01641e0c-osd--block--deb603ca--2db3--5399--8e8d--1e0d01641e0c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4hCwG2-6dQd-RLGd-XZAt-F0Bt-0Qyo-ciHyUq', 'scsi-0QEMU_QEMU_HARDDISK_11658218-3952-45bc-99ae-d48f4d257268', 'scsi-SQEMU_QEMU_HARDDISK_11658218-3952-45bc-99ae-d48f4d257268'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d8e344b--ecd1--5c90--b783--cb125ac7004a-osd--block--5d8e344b--ecd1--5c90--b783--cb125ac7004a', 'dm-uuid-LVM-orG5ExLC2iY5BVLcplh0u9DLThIpvNX3KJaplDmRqeZKenRtB4QpeuCWOXw3PgzH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d6be2487--d224--518f--9009--30806e6fa587-osd--block--d6be2487--d224--518f--9009--30806e6fa587', 'dm-uuid-LVM-KL2LYmO1kTxlUYYUFh2gjCBXeECjhxCTJB1356ftbeK9beZSHRhKJu7vShqwZTE5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c1f67558--6290--50a7--9c09--ea5e74fb08ab-osd--block--c1f67558--6290--50a7--9c09--ea5e74fb08ab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bgCf24-uSBe-cwb8-qv4r-q8a4-cjFj-uwPZpD', 'scsi-0QEMU_QEMU_HARDDISK_d43c938e-9c3c-4e95-bc09-26edff92b810', 'scsi-SQEMU_QEMU_HARDDISK_d43c938e-9c3c-4e95-bc09-26edff92b810'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a13d83a-3534-4183-8691-9f150495a6dc', 'scsi-SQEMU_QEMU_HARDDISK_3a13d83a-3534-4183-8691-9f150495a6dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455265 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.455272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad', 'scsi-SQEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455342 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part1', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part14', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part15', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part16', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455374 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5d8e344b--ecd1--5c90--b783--cb125ac7004a-osd--block--5d8e344b--ecd1--5c90--b783--cb125ac7004a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tkYHv1-MxvC-0X6I-bNAj-IW5c-doAI-n0j1mI', 'scsi-0QEMU_QEMU_HARDDISK_34bdd215-cdf5-4909-8dd4-972bf1b79030', 'scsi-SQEMU_QEMU_HARDDISK_34bdd215-cdf5-4909-8dd4-972bf1b79030'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d6be2487--d224--518f--9009--30806e6fa587-osd--block--d6be2487--d224--518f--9009--30806e6fa587'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hAH1cU-weZP-I8mi-YRcA-iLqE-N7sJ-khZnrk', 'scsi-0QEMU_QEMU_HARDDISK_709b939c-9ac4-47b1-b5c3-cb1d8710b2fd', 'scsi-SQEMU_QEMU_HARDDISK_709b939c-9ac4-47b1-b5c3-cb1d8710b2fd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455453 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_069ee836-7f84-4f9f-9b43-0fd45db025c2', 'scsi-SQEMU_QEMU_HARDDISK_069ee836-7f84-4f9f-9b43-0fd45db025c2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b', 'scsi-SQEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 
'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455495 | orchestrator | skipping: [testbed-node-0] 
2026-03-09 00:57:34.455501 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.455507 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.455514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455551 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:57:34.455591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac', 'scsi-SQEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455607 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:57:34.455614 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.455620 | orchestrator | 2026-03-09 00:57:34.455627 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-09 00:57:34.455633 | orchestrator | Monday 09 March 2026 00:46:24 +0000 (0:00:01.448) 0:00:34.337 ********** 2026-03-09 00:57:34.455641 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d9cda85--a301--5b16--a7fe--308b162b7259-osd--block--5d9cda85--a301--5b16--a7fe--308b162b7259', 'dm-uuid-LVM-HMglKMgOarJt39elepRreQ13BbpBTpIwgcHAQSWoKwrA5ROauy6uoqWqljFkY8Uw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.455649 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8734b320--4ffe--530d--8e73--0aec819257b4-osd--block--8734b320--4ffe--530d--8e73--0aec819257b4', 'dm-uuid-LVM-0oRFpggrbg2gDDWUKFXLRyOv3OVjB5p678FZlGpzndE4EOgbqu12F7mdcfnww5Ot'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.455655 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.455669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.455676 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.456853 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.456935 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.456942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.456949 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.456968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.456982 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 
'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--deb603ca--2db3--5399--8e8d--1e0d01641e0c-osd--block--deb603ca--2db3--5399--8e8d--1e0d01641e0c', 'dm-uuid-LVM-ymUw0TIiv27vbmGZKzqUO1xTKJjd4LELlXUeXZ0R5xZpaLUzedLAgzLI2r7WHUmD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.456998 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1f67558--6290--50a7--9c09--ea5e74fb08ab-osd--block--c1f67558--6290--50a7--9c09--ea5e74fb08ab', 'dm-uuid-LVM-YsXR6FhgZvrm6EivKPjX3dlWMAJuQcNNm5yd8wUg87KYebMgLJonznJrEwWBLQt0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457006 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-09 00:57:34.457023 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5d9cda85--a301--5b16--a7fe--308b162b7259-osd--block--5d9cda85--a301--5b16--a7fe--308b162b7259'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m7qQkp-HCAP-ekOq-9sXu-j33Q-bbOW-LIzZw2', 'scsi-0QEMU_QEMU_HARDDISK_26907958-5014-4e4e-aaae-f132ebc9345b', 'scsi-SQEMU_QEMU_HARDDISK_26907958-5014-4e4e-aaae-f132ebc9345b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457034 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457041 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--8734b320--4ffe--530d--8e73--0aec819257b4-osd--block--8734b320--4ffe--530d--8e73--0aec819257b4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-n1IWuy-ahs9-DtYW-xlXK-0evh-ueMl-JlfPEM', 'scsi-0QEMU_QEMU_HARDDISK_763f54df-2df6-4a17-b758-6e7498448fae', 'scsi-SQEMU_QEMU_HARDDISK_763f54df-2df6-4a17-b758-6e7498448fae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457048 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49ad7546-ef2d-4696-ae5b-c2e2e05846ff', 'scsi-SQEMU_QEMU_HARDDISK_49ad7546-ef2d-4696-ae5b-c2e2e05846ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457059 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457066 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457079 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457090 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457097 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457104 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.457111 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457121 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457128 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457138 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d8e344b--ecd1--5c90--b783--cb125ac7004a-osd--block--5d8e344b--ecd1--5c90--b783--cb125ac7004a', 'dm-uuid-LVM-orG5ExLC2iY5BVLcplh0u9DLThIpvNX3KJaplDmRqeZKenRtB4QpeuCWOXw3PgzH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457148 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d6be2487--d224--518f--9009--30806e6fa587-osd--block--d6be2487--d224--518f--9009--30806e6fa587', 'dm-uuid-LVM-KL2LYmO1kTxlUYYUFh2gjCBXeECjhxCTJB1356ftbeK9beZSHRhKJu7vShqwZTE5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457155 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-09 00:57:34.457166 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457176 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457186 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457193 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--deb603ca--2db3--5399--8e8d--1e0d01641e0c-osd--block--deb603ca--2db3--5399--8e8d--1e0d01641e0c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4hCwG2-6dQd-RLGd-XZAt-F0Bt-0Qyo-ciHyUq', 'scsi-0QEMU_QEMU_HARDDISK_11658218-3952-45bc-99ae-d48f4d257268', 'scsi-SQEMU_QEMU_HARDDISK_11658218-3952-45bc-99ae-d48f4d257268'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457204 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c1f67558--6290--50a7--9c09--ea5e74fb08ab-osd--block--c1f67558--6290--50a7--9c09--ea5e74fb08ab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bgCf24-uSBe-cwb8-qv4r-q8a4-cjFj-uwPZpD', 'scsi-0QEMU_QEMU_HARDDISK_d43c938e-9c3c-4e95-bc09-26edff92b810', 'scsi-SQEMU_QEMU_HARDDISK_d43c938e-9c3c-4e95-bc09-26edff92b810'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457211 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457220 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a13d83a-3534-4183-8691-9f150495a6dc', 'scsi-SQEMU_QEMU_HARDDISK_3a13d83a-3534-4183-8691-9f150495a6dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457232 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457238 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457245 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457262 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.457268 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457283 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part1', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part14', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part15', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part16', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457295 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457303 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457309 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5d8e344b--ecd1--5c90--b783--cb125ac7004a-osd--block--5d8e344b--ecd1--5c90--b783--cb125ac7004a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tkYHv1-MxvC-0X6I-bNAj-IW5c-doAI-n0j1mI', 'scsi-0QEMU_QEMU_HARDDISK_34bdd215-cdf5-4909-8dd4-972bf1b79030', 'scsi-SQEMU_QEMU_HARDDISK_34bdd215-cdf5-4909-8dd4-972bf1b79030'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457319 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457330 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457336 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d6be2487--d224--518f--9009--30806e6fa587-osd--block--d6be2487--d224--518f--9009--30806e6fa587'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hAH1cU-weZP-I8mi-YRcA-iLqE-N7sJ-khZnrk', 'scsi-0QEMU_QEMU_HARDDISK_709b939c-9ac4-47b1-b5c3-cb1d8710b2fd', 'scsi-SQEMU_QEMU_HARDDISK_709b939c-9ac4-47b1-b5c3-cb1d8710b2fd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457347 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457354 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457360 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457370 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_069ee836-7f84-4f9f-9b43-0fd45db025c2', 'scsi-SQEMU_QEMU_HARDDISK_069ee836-7f84-4f9f-9b43-0fd45db025c2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457381 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457388 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad', 'scsi-SQEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_c3247b01-1045-414b-8d68-d46805c465ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
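The long runs of "skipping" output above come from ceph-ansible offering every entry of `ansible_facts.devices` (loop devices, `sda`, `sr0`, ...) to a per-item loop and filtering with a `when:` conditional; the log's `false_condition` fields show two distinct gates, group membership (`inventory_hostname in groups.get(osd_group_name, [])`) on non-OSD hosts and `osd_auto_discovery | default(False) | bool` on OSD hosts. A minimal sketch of that pattern, assuming task name and the `candidate_devices` variable (neither is ceph-ansible's actual code):

```yaml
# Illustrative only: reproduces the skip pattern seen in the log.
# Each device fact is offered to the loop; hosts outside the OSD group
# skip every item, and OSD hosts skip all items unless auto-discovery
# is enabled. osd_group_name is assumed to be defined elsewhere.
- name: Collect candidate OSD devices (sketch)
  ansible.builtin.set_fact:
    candidate_devices: "{{ candidate_devices | default([]) + [item.key] }}"
  loop: "{{ ansible_facts.devices | dict2items }}"
  when:
    - inventory_hostname in groups.get(osd_group_name, [])
    - osd_auto_discovery | default(False) | bool
```

Because the conditional is evaluated per loop item, Ansible prints one `skipping: ... => (item=...)` line per device per host, which is why this section of the log is so verbose.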
2026-03-09 00:57:34.457402 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457412 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457419 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.457425 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457438 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457446 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457454 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457465 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457473 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457485 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457497 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457509 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b', 'scsi-SQEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a731137-5cc5-4157-94da-3d583abc100b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-09 00:57:34.457521 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457529 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.457536 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.457547 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457555 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457562 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457570 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457577 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457587 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457598 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457609 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457621 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac', 'scsi-SQEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part16', 
'scsi-SQEMU_QEMU_HARDDISK_d1ae549f-778a-485f-b059-8e9bc989d7ac-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457629 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:57:34.457639 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.457645 | orchestrator | 2026-03-09 00:57:34.457655 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-09 00:57:34.457662 | orchestrator | Monday 09 March 2026 00:46:26 +0000 (0:00:02.131) 0:00:36.469 ********** 2026-03-09 00:57:34.457669 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.457675 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.457682 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.457688 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.457694 | 
orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.457700 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.457707 | orchestrator | 2026-03-09 00:57:34.457713 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-09 00:57:34.457719 | orchestrator | Monday 09 March 2026 00:46:28 +0000 (0:00:01.524) 0:00:37.994 ********** 2026-03-09 00:57:34.457725 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.457732 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.457738 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.457744 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.457750 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.457756 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.457762 | orchestrator | 2026-03-09 00:57:34.457768 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-09 00:57:34.457774 | orchestrator | Monday 09 March 2026 00:46:29 +0000 (0:00:00.815) 0:00:38.809 ********** 2026-03-09 00:57:34.457780 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.457787 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.457793 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.457799 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.457808 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.457818 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.457827 | orchestrator | 2026-03-09 00:57:34.457837 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-09 00:57:34.457846 | orchestrator | Monday 09 March 2026 00:46:29 +0000 (0:00:00.781) 0:00:39.591 ********** 2026-03-09 00:57:34.457856 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.457883 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.457890 | orchestrator | skipping: [testbed-node-5] 2026-03-09 
00:57:34.457896 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.457902 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.457909 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.457915 | orchestrator | 2026-03-09 00:57:34.457922 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-09 00:57:34.457933 | orchestrator | Monday 09 March 2026 00:46:30 +0000 (0:00:00.626) 0:00:40.217 ********** 2026-03-09 00:57:34.457944 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.457954 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.457964 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.457973 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.457983 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.457993 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.458003 | orchestrator | 2026-03-09 00:57:34.458068 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-09 00:57:34.458077 | orchestrator | Monday 09 March 2026 00:46:31 +0000 (0:00:00.904) 0:00:41.122 ********** 2026-03-09 00:57:34.458084 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.458090 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.458096 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.458102 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.458109 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.458115 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.458121 | orchestrator | 2026-03-09 00:57:34.458127 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-09 00:57:34.458143 | orchestrator | Monday 09 March 2026 00:46:32 +0000 (0:00:00.682) 0:00:41.804 ********** 2026-03-09 00:57:34.458157 | orchestrator | ok: [testbed-node-3] => 
(item=testbed-node-0) 2026-03-09 00:57:34.458171 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-09 00:57:34.458181 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-09 00:57:34.458191 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-09 00:57:34.458200 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-09 00:57:34.458209 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-09 00:57:34.458220 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-09 00:57:34.458232 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-09 00:57:34.458244 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-09 00:57:34.458255 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-09 00:57:34.458265 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-09 00:57:34.458275 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-09 00:57:34.458284 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-09 00:57:34.458295 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-09 00:57:34.458312 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-09 00:57:34.458322 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-09 00:57:34.458331 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-09 00:57:34.458341 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-09 00:57:34.458351 | orchestrator | 2026-03-09 00:57:34.458361 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-09 00:57:34.458371 | orchestrator | Monday 09 March 2026 00:46:35 +0000 (0:00:02.932) 0:00:44.736 ********** 2026-03-09 00:57:34.458381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-09 00:57:34.458392 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-1)  2026-03-09 00:57:34.458401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-09 00:57:34.458413 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.458420 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-09 00:57:34.458426 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-09 00:57:34.458432 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-09 00:57:34.458438 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-09 00:57:34.458459 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-09 00:57:34.458465 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-09 00:57:34.458472 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.458478 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-09 00:57:34.458484 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-09 00:57:34.458490 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-09 00:57:34.458496 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.458502 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-09 00:57:34.458508 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-09 00:57:34.458514 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-09 00:57:34.458521 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.458527 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.458533 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-09 00:57:34.458539 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-09 00:57:34.458545 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-09 00:57:34.458551 | orchestrator | 
skipping: [testbed-node-2] 2026-03-09 00:57:34.458557 | orchestrator | 2026-03-09 00:57:34.458570 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-09 00:57:34.458576 | orchestrator | Monday 09 March 2026 00:46:35 +0000 (0:00:00.895) 0:00:45.632 ********** 2026-03-09 00:57:34.458583 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.458589 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.458595 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.458601 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.458608 | orchestrator | 2026-03-09 00:57:34.458614 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-09 00:57:34.458622 | orchestrator | Monday 09 March 2026 00:46:37 +0000 (0:00:01.440) 0:00:47.072 ********** 2026-03-09 00:57:34.458628 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.458634 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.458640 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.458646 | orchestrator | 2026-03-09 00:57:34.458652 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-09 00:57:34.458659 | orchestrator | Monday 09 March 2026 00:46:37 +0000 (0:00:00.358) 0:00:47.431 ********** 2026-03-09 00:57:34.458665 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.458671 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.458677 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.458684 | orchestrator | 2026-03-09 00:57:34.458690 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-09 00:57:34.458696 | orchestrator | Monday 09 March 2026 00:46:38 +0000 
(0:00:00.312) 0:00:47.744 **********
2026-03-09 00:57:34.458702 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.458710 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.458721 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.458736 | orchestrator |
2026-03-09 00:57:34.458748 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-09 00:57:34.458757 | orchestrator | Monday 09 March 2026 00:46:38 +0000 (0:00:00.414) 0:00:48.158 **********
2026-03-09 00:57:34.458767 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.458776 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.458786 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.458856 | orchestrator |
2026-03-09 00:57:34.458922 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-09 00:57:34.458933 | orchestrator | Monday 09 March 2026 00:46:39 +0000 (0:00:00.533) 0:00:48.692 **********
2026-03-09 00:57:34.458945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:57:34.458957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 00:57:34.458969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 00:57:34.458981 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.458993 | orchestrator |
2026-03-09 00:57:34.459005 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-09 00:57:34.459016 | orchestrator | Monday 09 March 2026 00:46:39 +0000 (0:00:00.428) 0:00:49.121 **********
2026-03-09 00:57:34.459026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:57:34.459036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 00:57:34.459047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 00:57:34.459066 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.459077 | orchestrator |
2026-03-09 00:57:34.459088 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-09 00:57:34.459101 | orchestrator | Monday 09 March 2026 00:46:39 +0000 (0:00:00.375) 0:00:49.496 **********
2026-03-09 00:57:34.459111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:57:34.459123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 00:57:34.459134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 00:57:34.459156 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.459166 | orchestrator |
2026-03-09 00:57:34.459176 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-09 00:57:34.459187 | orchestrator | Monday 09 March 2026 00:46:40 +0000 (0:00:00.392) 0:00:49.888 **********
2026-03-09 00:57:34.459198 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.459209 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.459220 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.459231 | orchestrator |
2026-03-09 00:57:34.459242 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-09 00:57:34.459252 | orchestrator | Monday 09 March 2026 00:46:40 +0000 (0:00:00.450) 0:00:50.339 **********
2026-03-09 00:57:34.459263 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-09 00:57:34.459274 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-09 00:57:34.459297 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-09 00:57:34.459307 | orchestrator |
2026-03-09 00:57:34.459317 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-09 00:57:34.459327 | orchestrator | Monday 09 March 2026 00:46:41 +0000 (0:00:01.269) 0:00:51.608 **********
2026-03-09 00:57:34.459337 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 00:57:34.459348 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 00:57:34.459359 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 00:57:34.459370 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:57:34.459380 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-09 00:57:34.459391 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-09 00:57:34.459401 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-09 00:57:34.459411 | orchestrator |
2026-03-09 00:57:34.459422 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-09 00:57:34.459431 | orchestrator | Monday 09 March 2026 00:46:42 +0000 (0:00:00.878) 0:00:52.486 **********
2026-03-09 00:57:34.459437 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 00:57:34.459443 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 00:57:34.459449 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 00:57:34.459455 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:57:34.459462 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-09 00:57:34.459468 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-09 00:57:34.459474 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-09 00:57:34.459482 | orchestrator |
2026-03-09 00:57:34.459492 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-09 00:57:34.459502 | orchestrator | Monday 09 March 2026 00:46:44 +0000 (0:00:02.076) 0:00:54.563 **********
2026-03-09 00:57:34.459512 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.459524 | orchestrator |
2026-03-09 00:57:34.459535 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-09 00:57:34.459545 | orchestrator | Monday 09 March 2026 00:46:46 +0000 (0:00:01.502) 0:00:56.065 **********
2026-03-09 00:57:34.459556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.459566 | orchestrator |
2026-03-09 00:57:34.459576 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-09 00:57:34.459595 | orchestrator | Monday 09 March 2026 00:46:47 +0000 (0:00:01.268) 0:00:57.333 **********
2026-03-09 00:57:34.459606 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.459614 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.459620 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.459626 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.459632 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.459639 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.459645 | orchestrator |
2026-03-09 00:57:34.459651 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-09 00:57:34.459657 | orchestrator | Monday 09 March 2026 00:46:49 +0000 (0:00:01.558) 0:00:58.892 **********
2026-03-09 00:57:34.459663 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.459670 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.459676 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.459682 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.459688 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.459694 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.459700 | orchestrator |
2026-03-09 00:57:34.459706 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-09 00:57:34.459716 | orchestrator | Monday 09 March 2026 00:46:50 +0000 (0:00:00.786) 0:00:59.678 **********
2026-03-09 00:57:34.459738 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.459747 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.459757 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.459766 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.459776 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.459785 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.459795 | orchestrator |
2026-03-09 00:57:34.459805 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-09 00:57:34.459815 | orchestrator | Monday 09 March 2026 00:46:51 +0000 (0:00:01.024) 0:01:00.703 **********
2026-03-09 00:57:34.459825 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.459836 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.459846 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.459857 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.459890 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.459900 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.459911 | orchestrator |
2026-03-09 00:57:34.459922 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-09 00:57:34.459931 | orchestrator | Monday 09 March 2026 00:46:52 +0000 (0:00:01.171) 0:01:01.875 **********
2026-03-09 00:57:34.459937 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.459944 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.459950 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.459956 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.459962 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.459975 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.459982 | orchestrator |
2026-03-09 00:57:34.459988 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-09 00:57:34.459994 | orchestrator | Monday 09 March 2026 00:46:54 +0000 (0:00:02.062) 0:01:03.937 **********
2026-03-09 00:57:34.460000 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.460006 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.460013 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.460019 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.460025 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.460031 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.460037 | orchestrator |
2026-03-09 00:57:34.460043 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-09 00:57:34.460050 | orchestrator | Monday 09 March 2026 00:46:55 +0000 (0:00:01.412) 0:01:05.349 **********
2026-03-09 00:57:34.460056 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.460062 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.460075 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.460081 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.460087 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.460093 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.460099 | orchestrator |
2026-03-09 00:57:34.460105 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-09 00:57:34.460112 | orchestrator | Monday 09 March 2026 00:46:57 +0000 (0:00:01.457) 0:01:06.807 **********
2026-03-09 00:57:34.460118 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.460124 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.460130 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.460137 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.460143 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.460149 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.460155 | orchestrator |
2026-03-09 00:57:34.460161 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-09 00:57:34.460168 | orchestrator | Monday 09 March 2026 00:46:59 +0000 (0:00:02.591) 0:01:09.398 **********
2026-03-09 00:57:34.460174 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.460180 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.460186 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.460192 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.460198 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.460205 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.460211 | orchestrator |
2026-03-09 00:57:34.460219 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-09 00:57:34.460229 | orchestrator | Monday 09 March 2026 00:47:01 +0000 (0:00:01.894) 0:01:11.293 **********
2026-03-09 00:57:34.460240 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.460251 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.460260 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.460269 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.460279 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.460290 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.460300 | orchestrator |
2026-03-09 00:57:34.460311 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-09 00:57:34.460321 | orchestrator | Monday 09 March 2026 00:47:02 +0000 (0:00:00.602) 0:01:11.895 **********
2026-03-09 00:57:34.460331 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.460339 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.460345 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.460351 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.460358 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.460364 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.460370 | orchestrator |
2026-03-09 00:57:34.460377 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-09 00:57:34.460383 | orchestrator | Monday 09 March 2026 00:47:03 +0000 (0:00:00.995) 0:01:12.890 **********
2026-03-09 00:57:34.460389 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.460395 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.460401 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.460407 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.460414 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.460420 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.460426 | orchestrator |
2026-03-09 00:57:34.460432 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-09 00:57:34.460439 | orchestrator | Monday 09 March 2026 00:47:04 +0000 (0:00:01.010) 0:01:13.901 **********
2026-03-09 00:57:34.460445 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.460451 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.460457 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.460463 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.460469 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.460476 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.460487 | orchestrator |
2026-03-09 00:57:34.460493 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-09 00:57:34.460504 | orchestrator | Monday 09 March 2026 00:47:05 +0000 (0:00:01.504) 0:01:15.405 **********
2026-03-09 00:57:34.460510 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.460516 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.460522 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.460528 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.460534 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.460540 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.460547 | orchestrator |
2026-03-09 00:57:34.460553 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-09 00:57:34.460559 | orchestrator | Monday 09 March 2026 00:47:07 +0000 (0:00:01.462) 0:01:16.867 **********
2026-03-09 00:57:34.460565 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.460571 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.460577 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.460583 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.460589 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.460595 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.460601 | orchestrator |
2026-03-09 00:57:34.460608 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-09 00:57:34.460614 | orchestrator | Monday 09 March 2026 00:47:08 +0000 (0:00:01.263) 0:01:18.131 **********
2026-03-09 00:57:34.460620 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.460626 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.460632 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.460639 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.460650 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.460656 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.460662 | orchestrator |
2026-03-09 00:57:34.460669 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-09 00:57:34.460675 | orchestrator | Monday 09 March 2026 00:47:09 +0000 (0:00:00.877) 0:01:19.009 **********
2026-03-09 00:57:34.460681 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.460688 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.460694 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.460700 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.460706 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.460712 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.460718 | orchestrator |
2026-03-09 00:57:34.460725 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-09 00:57:34.460731 | orchestrator | Monday 09 March 2026 00:47:10 +0000 (0:00:01.267) 0:01:20.276 **********
2026-03-09 00:57:34.460737 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.460743 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.460749 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.460756 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.460762 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.460768 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.460774 | orchestrator |
2026-03-09 00:57:34.460780 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-09 00:57:34.460786 | orchestrator | Monday 09 March 2026 00:47:11 +0000 (0:00:00.851) 0:01:21.128 **********
2026-03-09 00:57:34.460792 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.460799 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.460805 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.460811 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.460817 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.460823 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.460829 | orchestrator |
2026-03-09 00:57:34.460835 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-09 00:57:34.460841 | orchestrator | Monday 09 March 2026 00:47:12 +0000 (0:00:01.358) 0:01:22.486 **********
2026-03-09 00:57:34.460847 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:57:34.460858 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:57:34.460921 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:57:34.460932 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.460943 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.460954 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.460964 | orchestrator |
2026-03-09 00:57:34.460975 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-09 00:57:34.460981 | orchestrator | Monday 09 March 2026 00:47:14 +0000 (0:00:01.937) 0:01:24.423 **********
2026-03-09 00:57:34.460988 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:57:34.460994 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.461000 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:57:34.461006 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:57:34.461013 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.461019 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.461025 | orchestrator |
2026-03-09 00:57:34.461031 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-09 00:57:34.461038 | orchestrator | Monday 09 March 2026 00:47:17 +0000 (0:00:02.429) 0:01:26.853 **********
2026-03-09 00:57:34.461045 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.461051 | orchestrator |
2026-03-09 00:57:34.461057 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-09 00:57:34.461064 | orchestrator | Monday 09 March 2026 00:47:18 +0000 (0:00:01.177) 0:01:28.030 **********
2026-03-09 00:57:34.461070 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.461076 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.461082 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.461088 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.461094 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.461101 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.461107 | orchestrator |
2026-03-09 00:57:34.461113 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-09 00:57:34.461119 | orchestrator | Monday 09 March 2026 00:47:18 +0000 (0:00:00.563) 0:01:28.593 **********
2026-03-09 00:57:34.461125 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.461132 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.461138 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.461144 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.461150 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.461156 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.461162 | orchestrator |
2026-03-09 00:57:34.461168 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-09 00:57:34.461179 | orchestrator | Monday 09 March 2026 00:47:19 +0000 (0:00:00.757) 0:01:29.351 **********
2026-03-09 00:57:34.461186 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-09 00:57:34.461192 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-09 00:57:34.461198 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-09 00:57:34.461204 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-09 00:57:34.461210 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-09 00:57:34.461216 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-09 00:57:34.461223 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-09 00:57:34.461229 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-09 00:57:34.461235 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-09 00:57:34.461242 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-09 00:57:34.461259 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-09 00:57:34.461265 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-09 00:57:34.461271 | orchestrator |
2026-03-09 00:57:34.461278 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-09 00:57:34.461284 | orchestrator | Monday 09 March 2026 00:47:21 +0000 (0:00:01.343) 0:01:30.695 **********
2026-03-09 00:57:34.461290 | orchestrator | changed: [testbed-node-3]
2026-03-09 00:57:34.461297 | orchestrator | changed: [testbed-node-4]
2026-03-09 00:57:34.461303 | orchestrator | changed: [testbed-node-5]
2026-03-09 00:57:34.461309 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.461315 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.461322 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.461328 | orchestrator |
2026-03-09 00:57:34.461334 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-09 00:57:34.461340 | orchestrator | Monday 09 March 2026 00:47:22 +0000 (0:00:01.201) 0:01:31.897 **********
2026-03-09 00:57:34.461346 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.461352 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.461358 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.461365 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.461371 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.461377 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.461383 | orchestrator |
2026-03-09 00:57:34.461389 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-09 00:57:34.461395 | orchestrator | Monday 09 March 2026 00:47:22 +0000 (0:00:00.631) 0:01:32.529 **********
2026-03-09 00:57:34.461403 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.461414 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.461424 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.461434 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.461444 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.461455 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.461465 | orchestrator |
2026-03-09 00:57:34.461476 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-09 00:57:34.461486 | orchestrator | Monday 09 March 2026 00:47:23 +0000 (0:00:00.782) 0:01:33.311 **********
2026-03-09 00:57:34.461494 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.461501 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.461507 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.461513 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.461519 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.461525 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.461531 | orchestrator |
2026-03-09 00:57:34.461537 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-09 00:57:34.461544 | orchestrator | Monday 09 March 2026 00:47:24 +0000 (0:00:00.562) 0:01:33.874 **********
2026-03-09 00:57:34.461550 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.461557 | orchestrator |
2026-03-09 00:57:34.461563 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-09 00:57:34.461569 | orchestrator | Monday 09 March 2026 00:47:25 +0000 (0:00:01.240) 0:01:35.114 **********
2026-03-09 00:57:34.461575 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.461581 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.461588 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.461594 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.461600 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.461606 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.461612 | orchestrator |
2026-03-09 00:57:34.461618 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-09 00:57:34.461630 | orchestrator | Monday 09 March 2026 00:48:13 +0000 (0:00:47.571) 0:02:22.686 **********
2026-03-09 00:57:34.461636 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-09 00:57:34.461642 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-09 00:57:34.461649 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-09 00:57:34.461655 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.461661 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-09 00:57:34.461667 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-09 00:57:34.461674 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-09 00:57:34.461683 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.461690 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-09 00:57:34.461696 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-09 00:57:34.461702 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-09 00:57:34.461708 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.461715 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-09 00:57:34.461721 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-09 00:57:34.461727 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-09 00:57:34.461733 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.461739 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-09 00:57:34.461746 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-09 00:57:34.461752 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-09 00:57:34.461758 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.461769 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-09 00:57:34.461775 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-09 00:57:34.461781 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-09 00:57:34.461788 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.461794 | orchestrator |
2026-03-09 00:57:34.461800 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-09 00:57:34.461806 | orchestrator | Monday 09 March 2026 00:48:13 +0000 (0:00:00.729) 0:02:23.415 **********
2026-03-09 00:57:34.461812 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.461818 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.461825 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.461831 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.461837 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.461843 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.461849 | orchestrator |
2026-03-09 00:57:34.461855 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-09 00:57:34.461880 | orchestrator | Monday 09 March 2026 00:48:14 +0000 (0:00:00.821) 0:02:24.237 **********
2026-03-09 00:57:34.461887 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.461894 | orchestrator |
2026-03-09 00:57:34.461900 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-09 00:57:34.461906 | orchestrator | Monday 09 March 2026 00:48:14 +0000 (0:00:00.167) 0:02:24.404 **********
2026-03-09 00:57:34.461912 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.461919 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.461925 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.461931 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.461937 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.461948 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.461954 | orchestrator |
2026-03-09 00:57:34.461961 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-09 00:57:34.461967 | orchestrator | Monday 09 March 2026 00:48:15 +0000 (0:00:00.804) 0:02:25.208 **********
2026-03-09 00:57:34.461974 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.461985 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.461995 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.462005 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.462049 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.462059 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.462069 | orchestrator |
2026-03-09 00:57:34.462080 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-09 00:57:34.462090 | orchestrator | Monday 09 March 2026 00:48:16 +0000 (0:00:00.966) 0:02:26.175 **********
2026-03-09 00:57:34.462101 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.462113 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.462123 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.462133 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.462143 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.462152 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.462161 | orchestrator |
2026-03-09 00:57:34.462171 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-09 00:57:34.462181 | orchestrator | Monday 09 March 2026 00:48:17 +0000 (0:00:00.668) 0:02:26.843 **********
2026-03-09 00:57:34.462190 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.462199 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.462208 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.462218 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.462228 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.462238 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.462247 | orchestrator |
2026-03-09 00:57:34.462257 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-09 00:57:34.462267 | orchestrator | Monday 09 March 2026 00:48:19 +0000 (0:00:02.757) 0:02:29.601 **********
2026-03-09 00:57:34.462277 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:57:34.462287 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:57:34.462297 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:57:34.462308 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.462318 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.462329 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.462339 | orchestrator |
2026-03-09 00:57:34.462350 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-09 00:57:34.462361 | orchestrator | Monday 09 March 2026 00:48:20 +0000 (0:00:00.580) 0:02:30.182 **********
2026-03-09 00:57:34.462372 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.462384 | orchestrator |
2026-03-09 00:57:34.462395 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-09 00:57:34.462414 | orchestrator | Monday 09 March 2026 00:48:21 +0000 (0:00:01.083) 0:02:31.265 **********
2026-03-09 00:57:34.462425 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.462436 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.462447 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.462457 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.462469 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.462480 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.462491 | orchestrator |
2026-03-09 00:57:34.462503 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-09 00:57:34.462514 | orchestrator | Monday 09 March 2026 00:48:22 +0000 (0:00:00.692) 0:02:31.958 **********
2026-03-09 00:57:34.462525 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.462534 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.462559 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.462570 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.462581 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.462592 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.462604 | orchestrator |
2026-03-09 00:57:34.462614 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-09 00:57:34.462625 | orchestrator | Monday 09 March 2026 00:48:22 +0000 (0:00:00.557) 0:02:32.515 **********
2026-03-09 00:57:34.462635 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.462646 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.462669 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.462680 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.462691 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.462701 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.462711 | orchestrator |
2026-03-09 00:57:34.462720 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-09 00:57:34.462730 | orchestrator | Monday 09 March 2026 00:48:23 +0000 (0:00:00.650) 0:02:33.166 **********
2026-03-09 00:57:34.462740 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:57:34.462750 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:57:34.462760 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:57:34.462770 | orchestrator | skipping:
[testbed-node-0] 2026-03-09 00:57:34.462779 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.462789 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.462799 | orchestrator | 2026-03-09 00:57:34.462810 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-09 00:57:34.462822 | orchestrator | Monday 09 March 2026 00:48:24 +0000 (0:00:00.510) 0:02:33.677 ********** 2026-03-09 00:57:34.462834 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.462845 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.462856 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.462890 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.462896 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.462902 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.462908 | orchestrator | 2026-03-09 00:57:34.462915 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-09 00:57:34.462921 | orchestrator | Monday 09 March 2026 00:48:24 +0000 (0:00:00.604) 0:02:34.281 ********** 2026-03-09 00:57:34.462927 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.462933 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.462941 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.462950 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.462960 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.462969 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.462979 | orchestrator | 2026-03-09 00:57:34.462990 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-09 00:57:34.462997 | orchestrator | Monday 09 March 2026 00:48:25 +0000 (0:00:00.437) 0:02:34.719 ********** 2026-03-09 00:57:34.463003 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.463009 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 00:57:34.463015 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.463021 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.463028 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.463034 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.463040 | orchestrator | 2026-03-09 00:57:34.463046 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-09 00:57:34.463052 | orchestrator | Monday 09 March 2026 00:48:25 +0000 (0:00:00.662) 0:02:35.382 ********** 2026-03-09 00:57:34.463058 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.463064 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.463071 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.463077 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.463091 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.463097 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.463103 | orchestrator | 2026-03-09 00:57:34.463110 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-09 00:57:34.463116 | orchestrator | Monday 09 March 2026 00:48:26 +0000 (0:00:00.597) 0:02:35.979 ********** 2026-03-09 00:57:34.463122 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.463128 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.463134 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.463141 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.463147 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.463153 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.463159 | orchestrator | 2026-03-09 00:57:34.463165 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-09 00:57:34.463172 | orchestrator | Monday 09 March 2026 00:48:27 +0000 (0:00:01.068) 0:02:37.048 ********** 2026-03-09 
00:57:34.463178 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:34.463186 | orchestrator | 2026-03-09 00:57:34.463192 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-09 00:57:34.463198 | orchestrator | Monday 09 March 2026 00:48:28 +0000 (0:00:01.010) 0:02:38.058 ********** 2026-03-09 00:57:34.463204 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-09 00:57:34.463211 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-09 00:57:34.463217 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-09 00:57:34.463224 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-09 00:57:34.463239 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-09 00:57:34.463249 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-09 00:57:34.463258 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-09 00:57:34.463268 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-09 00:57:34.463278 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-09 00:57:34.463288 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-09 00:57:34.463297 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-09 00:57:34.463308 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-09 00:57:34.463318 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-09 00:57:34.463327 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-09 00:57:34.463337 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-09 00:57:34.463347 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 
2026-03-09 00:57:34.463355 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-09 00:57:34.463365 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-09 00:57:34.463385 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-09 00:57:34.463396 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-09 00:57:34.463406 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-09 00:57:34.463416 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-09 00:57:34.463426 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-09 00:57:34.463436 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-09 00:57:34.463446 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-09 00:57:34.463457 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-09 00:57:34.463467 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-09 00:57:34.463476 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-09 00:57:34.463486 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-09 00:57:34.463507 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-09 00:57:34.463517 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-09 00:57:34.463528 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-09 00:57:34.463537 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-09 00:57:34.463547 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-09 00:57:34.463556 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-09 00:57:34.463566 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-09 00:57:34.463576 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-09 00:57:34.463585 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-09 00:57:34.463596 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-09 00:57:34.463606 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-09 00:57:34.463616 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-09 00:57:34.463626 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-09 00:57:34.463636 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-09 00:57:34.463646 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-09 00:57:34.463655 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-09 00:57:34.463665 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-09 00:57:34.463675 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-09 00:57:34.463686 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-09 00:57:34.463697 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-09 00:57:34.463707 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-09 00:57:34.463717 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-09 00:57:34.463726 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-09 00:57:34.463736 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-09 00:57:34.463746 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-09 00:57:34.463757 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-09 00:57:34.463767 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-09 00:57:34.463777 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-09 00:57:34.463787 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-09 00:57:34.463797 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-09 00:57:34.463808 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-09 00:57:34.463818 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-09 00:57:34.463830 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-09 00:57:34.463840 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-09 00:57:34.463852 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-09 00:57:34.463880 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-09 00:57:34.463892 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-09 00:57:34.463903 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-09 00:57:34.463914 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-09 00:57:34.463920 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-09 00:57:34.463927 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-09 00:57:34.463949 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-09 00:57:34.463959 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-09 00:57:34.463970 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-09 00:57:34.463979 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 
2026-03-09 00:57:34.463986 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-09 00:57:34.463992 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-09 00:57:34.464008 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-09 00:57:34.464014 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-09 00:57:34.464021 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-09 00:57:34.464027 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-09 00:57:34.464033 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-09 00:57:34.464040 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-09 00:57:34.464046 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-09 00:57:34.464052 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-09 00:57:34.464059 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-09 00:57:34.464065 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-09 00:57:34.464071 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-09 00:57:34.464077 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-09 00:57:34.464084 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-03-09 00:57:34.464090 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-09 00:57:34.464096 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-09 00:57:34.464103 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-09 00:57:34.464109 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-09 00:57:34.464115 | 
orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-09 00:57:34.464121 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-09 00:57:34.464127 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-03-09 00:57:34.464134 | orchestrator | 2026-03-09 00:57:34.464140 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-09 00:57:34.464146 | orchestrator | Monday 09 March 2026 00:48:35 +0000 (0:00:06.907) 0:02:44.965 ********** 2026-03-09 00:57:34.464152 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.464159 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.464165 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.464172 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.464179 | orchestrator | 2026-03-09 00:57:34.464245 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-09 00:57:34.464262 | orchestrator | Monday 09 March 2026 00:48:36 +0000 (0:00:01.083) 0:02:46.048 ********** 2026-03-09 00:57:34.464268 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.464275 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.464282 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.464288 | orchestrator | 2026-03-09 00:57:34.464294 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-09 00:57:34.464306 | orchestrator | Monday 09 March 2026 00:48:37 +0000 (0:00:01.216) 
0:02:47.265 ********** 2026-03-09 00:57:34.464312 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.464319 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.464325 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.464331 | orchestrator | 2026-03-09 00:57:34.464338 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-09 00:57:34.464344 | orchestrator | Monday 09 March 2026 00:48:39 +0000 (0:00:01.486) 0:02:48.751 ********** 2026-03-09 00:57:34.464353 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.464364 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.464374 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.464389 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.464398 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.464408 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.464419 | orchestrator | 2026-03-09 00:57:34.464428 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-09 00:57:34.464437 | orchestrator | Monday 09 March 2026 00:48:39 +0000 (0:00:00.630) 0:02:49.381 ********** 2026-03-09 00:57:34.464446 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.464456 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.464467 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.464477 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.464487 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.464498 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.464508 | orchestrator | 2026-03-09 
00:57:34.464517 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-09 00:57:34.464527 | orchestrator | Monday 09 March 2026 00:48:40 +0000 (0:00:00.861) 0:02:50.242 ********** 2026-03-09 00:57:34.464537 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.464548 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.464558 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.464568 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.464578 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.464588 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.464598 | orchestrator | 2026-03-09 00:57:34.464619 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-09 00:57:34.464630 | orchestrator | Monday 09 March 2026 00:48:41 +0000 (0:00:00.659) 0:02:50.902 ********** 2026-03-09 00:57:34.464640 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.464650 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.464660 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.464671 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.464678 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.464684 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.464690 | orchestrator | 2026-03-09 00:57:34.464696 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-09 00:57:34.464703 | orchestrator | Monday 09 March 2026 00:48:42 +0000 (0:00:00.785) 0:02:51.688 ********** 2026-03-09 00:57:34.464709 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.464716 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.464727 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.464737 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.464746 | orchestrator | skipping: 
[testbed-node-1] 2026-03-09 00:57:34.464756 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.464765 | orchestrator | 2026-03-09 00:57:34.464775 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-09 00:57:34.464785 | orchestrator | Monday 09 March 2026 00:48:42 +0000 (0:00:00.586) 0:02:52.274 ********** 2026-03-09 00:57:34.464804 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.464813 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.464822 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.464832 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.464841 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.464851 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.464885 | orchestrator | 2026-03-09 00:57:34.464896 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-09 00:57:34.464906 | orchestrator | Monday 09 March 2026 00:48:43 +0000 (0:00:00.825) 0:02:53.100 ********** 2026-03-09 00:57:34.464917 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.464927 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.464938 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.464948 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.464958 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.464968 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.464979 | orchestrator | 2026-03-09 00:57:34.464989 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-09 00:57:34.465001 | orchestrator | Monday 09 March 2026 00:48:44 +0000 (0:00:00.764) 0:02:53.864 ********** 2026-03-09 00:57:34.465011 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.465022 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 00:57:34.465033 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.465043 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.465054 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.465079 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.465099 | orchestrator | 2026-03-09 00:57:34.465110 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-09 00:57:34.465121 | orchestrator | Monday 09 March 2026 00:48:45 +0000 (0:00:00.859) 0:02:54.724 ********** 2026-03-09 00:57:34.465132 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.465143 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.465153 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.465164 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.465174 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.465184 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.465194 | orchestrator | 2026-03-09 00:57:34.465205 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-09 00:57:34.465216 | orchestrator | Monday 09 March 2026 00:48:48 +0000 (0:00:03.614) 0:02:58.338 ********** 2026-03-09 00:57:34.465227 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.465237 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.465248 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.465259 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.465269 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.465280 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.465291 | orchestrator | 2026-03-09 00:57:34.465302 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-09 00:57:34.465312 | orchestrator | Monday 09 March 2026 00:48:49 +0000 (0:00:01.038) 0:02:59.377 ********** 
2026-03-09 00:57:34.465323 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.465334 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.465344 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.465355 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.465366 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.465376 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.465387 | orchestrator | 2026-03-09 00:57:34.465404 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-09 00:57:34.465415 | orchestrator | Monday 09 March 2026 00:48:50 +0000 (0:00:00.660) 0:03:00.037 ********** 2026-03-09 00:57:34.465425 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.465434 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.465451 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.465462 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.465473 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.465483 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.465493 | orchestrator | 2026-03-09 00:57:34.465504 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-09 00:57:34.465515 | orchestrator | Monday 09 March 2026 00:48:51 +0000 (0:00:00.891) 0:03:00.929 ********** 2026-03-09 00:57:34.465525 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.465537 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.465547 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.465558 | orchestrator | skipping: [testbed-node-0] 2026-03-09 
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set config to cluster] *************************************
Monday 09 March 2026 00:48:52 +0000 (0:00:00.720) 0:03:01.650 **********
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set rgw configs to file] ***********************************
Monday 09 March 2026 00:48:53 +0000 (0:00:01.476) 0:03:03.126 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Create ceph conf directory] ********************************
Monday 09 March 2026 00:48:54 +0000 (0:00:00.918) 0:03:04.045 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Monday 09 March 2026 00:48:55 +0000 (0:00:01.298) 0:03:05.343 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Monday 09 March 2026 00:48:56 +0000 (0:00:00.831) 0:03:06.175 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Monday 09 March 2026 00:48:57 +0000 (0:00:00.787) 0:03:06.963 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Monday 09 March 2026 00:48:57 +0000 (0:00:00.614) 0:03:07.577 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _interface] ****************************************
Monday 09 March 2026 00:48:59 +0000 (0:00:01.570) 0:03:09.148 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]
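The skipped "Set config to cluster" items above each carry a per-RGW-instance section name (e.g. `client.rgw.default.testbed-node-3.rgw0`) and its options (`log_file`, `rgw_frontends` with a beast endpoint). In this run those settings land in ceph.conf via the later "Generate Ceph file" task instead. As a hedged illustration only (not the ceph-ansible role's actual implementation), a task writing the same logged keys into an INI-style ceph.conf section could look like this, assuming `community.general.ini_file` is available:

```yaml
# Illustrative sketch - not the task from this run. Writes the logged
# per-RGW options into an INI section of ceph.conf.
- name: Set rgw options in ceph.conf (sketch)
  community.general.ini_file:
    path: /etc/ceph/ceph.conf
    section: client.rgw.default.testbed-node-3.rgw0
    option: "{{ item.key }}"
    value: "{{ item.value }}"
  loop:
    - { key: log_file, value: /var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log }
    - { key: rgw_frontends, value: "beast endpoint=192.168.16.13:8081" }
```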
TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Monday 09 March 2026 00:48:59 +0000 (0:00:00.496) 0:03:09.645 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Monday 09 March 2026 00:49:00 +0000 (0:00:00.493) 0:03:10.138 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Monday 09 March 2026 00:49:00 +0000 (0:00:00.395) 0:03:10.533 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Monday 09 March 2026 00:49:01 +0000 (0:00:00.835) 0:03:11.369 **********
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-3] => (item=0)
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-2] => (item=0)
ok: [testbed-node-5] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Generate Ceph file] ****************************************
Monday 09 March 2026 00:49:05 +0000 (0:00:03.713) 0:03:15.083 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-1]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 09 March 2026 00:49:08 +0000 (0:00:02.968) 0:03:18.051 **********
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Monday 09 March 2026 00:49:09 +0000 (0:00:01.345) 0:03:19.397 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Monday 09 March 2026 00:49:11 +0000 (0:00:01.255) 0:03:20.652 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Monday 09 March 2026 00:49:11 +0000 (0:00:00.557) 0:03:21.210 **********
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]
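The mon, osd, mds, rgw and mgr handlers in this run all follow the same four-step pattern: set a "handler called" fact, copy a restart script to the node, run it only when a restart is actually required, then clear the fact. A minimal sketch of that pattern, with the flag name mirrored from the log but the template name and restart condition purely hypothetical:

```yaml
# Illustrative sketch of the handler pattern seen in the log; the
# template src and the when-condition are assumptions, not the role's code.
- name: Set _mon_handler_called before restart
  ansible.builtin.set_fact:
    _mon_handler_called: true

- name: Copy mon restart script
  ansible.builtin.template:
    src: restart_mon_daemon.sh.j2        # hypothetical template name
    dest: /tmp/restart_mon_daemon.sh
    mode: "0750"

- name: Restart ceph mon daemon(s)
  ansible.builtin.command: /tmp/restart_mon_daemon.sh
  when: mon_config_changed | default(false)   # hypothetical condition

- name: Set _mon_handler_called after restart
  ansible.builtin.set_fact:
    _mon_handler_called: false
```

In this run the restart steps were skipped on every host, which is why only the "before"/"after" facts and the script copy show activity.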
RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Monday 09 March 2026 00:49:12 +0000 (0:00:01.338) 0:03:22.549 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Monday 09 March 2026 00:49:14 +0000 (0:00:01.300) 0:03:23.849 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Monday 09 March 2026 00:49:14 +0000 (0:00:00.513) 0:03:24.362 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Monday 09 March 2026 00:49:16 +0000 (0:00:01.391) 0:03:25.754 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Monday 09 March 2026 00:49:16 +0000 (0:00:00.469) 0:03:26.224 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Monday 09 March 2026 00:49:16 +0000 (0:00:00.412) 0:03:26.637 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Monday 09 March 2026 00:49:17 +0000 (0:00:00.245) 0:03:26.882 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Monday 09 March 2026 00:49:17 +0000 (0:00:00.535) 0:03:27.418 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Monday 09 March 2026 00:49:18 +0000 (0:00:00.381) 0:03:27.800 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Monday 09 March 2026 00:49:18 +0000 (0:00:00.245) 0:03:28.046 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Monday 09 March 2026 00:49:18 +0000 (0:00:00.119) 0:03:28.165 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Monday 09 March 2026 00:49:19 +0000 (0:00:00.825) 0:03:28.990 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Monday 09 March 2026 00:49:19 +0000 (0:00:00.246) 0:03:29.237 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Monday 09 March 2026 00:49:20 +0000 (0:00:00.469) 0:03:29.707 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Monday 09 March 2026 00:49:20 +0000 (0:00:00.568) 0:03:30.275 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Monday 09 March 2026 00:49:20 +0000 (0:00:00.227) 0:03:30.502 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Monday 09 March 2026 00:49:21 +0000 (0:00:00.246) 0:03:30.749 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Monday 09 March 2026 00:49:22 +0000 (0:00:01.281) 0:03:32.030 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Monday 09 March 2026 00:49:22 +0000 (0:00:00.383) 0:03:32.414 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Monday 09 March 2026 00:49:24 +0000 (0:00:01.464) 0:03:33.879 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Monday 09 March 2026 00:49:25 +0000 (0:00:00.894) 0:03:34.774 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Monday 09 March 2026 00:49:25 +0000 (0:00:00.655) 0:03:35.429 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Monday 09 March 2026 00:49:26 +0000 (0:00:00.826) 0:03:36.256 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Monday 09 March 2026 00:49:27 +0000 (0:00:00.626) 0:03:36.883 **********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Monday 09 March 2026 00:49:28 +0000 (0:00:01.644) 0:03:38.527 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Monday 09 March 2026 00:49:29 +0000 (0:00:00.741) 0:03:39.269 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
Monday 09 March 2026 00:49:29 +0000 (0:00:00.349) 0:03:39.619 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Monday 09 March 2026 00:49:31 +0000 (0:00:01.055) 0:03:40.674 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Monday 09 March 2026 00:49:31 +0000 (0:00:00.852) 0:03:41.527 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Monday 09 March 2026 00:49:32 +0000 (0:00:00.619) 0:03:42.146 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Monday 09 March 2026 00:49:33 +0000 (0:00:01.243) 0:03:43.390 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Monday 09 March 2026 00:49:34 +0000 (0:00:00.746) 0:03:44.136 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 09 March 2026 00:49:35 +0000 (0:00:00.631) 0:03:44.767 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 09 March 2026 00:49:36 +0000 (0:00:00.898) 0:03:45.666 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Monday 09 March 2026 00:49:36 +0000 (0:00:00.579) 0:03:46.245 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 09 March 2026 00:49:38 +0000 (0:00:02.068) 0:03:48.314 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 09 March 2026 00:49:39 +0000 (0:00:00.354) 0:03:48.668 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 09 March 2026 00:49:39 +0000 (0:00:00.441) 0:03:49.110 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 09 March 2026 00:49:39 +0000 (0:00:00.421) 0:03:49.532 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 09 March 2026 00:49:41 +0000 (0:00:01.249) 0:03:50.782 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 09 March 2026 00:49:41 +0000 (0:00:00.383) 0:03:51.165 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 09 March 2026 00:49:41 +0000 (0:00:00.305) 0:03:51.471 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 09 March 2026 00:49:42 +0000 (0:00:00.939) 0:03:52.411 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
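The "Check for a ... container" tasks above probe whether each daemon's container exists on the host; their results feed the `handler_*_status` facts that follow. A hedged sketch of such a probe, where the container-name filter, the register name, and the `docker` CLI choice are assumptions for illustration rather than the role's actual code:

```yaml
# Illustrative sketch of a container existence check like the tasks above.
# The filter pattern and variable names are hypothetical.
- name: Check for a mon container (sketch)
  ansible.builtin.command: docker ps -q --filter "name=ceph-mon-{{ ansible_hostname }}"
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false

- name: Set_fact handler_mon_status (sketch)
  ansible.builtin.set_fact:
    handler_mon_status: "{{ ceph_mon_container_stat.stdout | length > 0 }}"
```

Marking the probe `changed_when: false` and `failed_when: false` keeps it a pure read: an absent container simply yields empty output instead of failing the play.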
00:57:34.469741 | orchestrator | 2026-03-09 00:57:34.469748 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-09 00:57:34.469757 | orchestrator | Monday 09 March 2026 00:49:43 +0000 (0:00:00.952) 0:03:53.364 ********** 2026-03-09 00:57:34.469765 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.469774 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.469782 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.469791 | orchestrator | 2026-03-09 00:57:34.469799 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-09 00:57:34.469808 | orchestrator | Monday 09 March 2026 00:49:44 +0000 (0:00:00.329) 0:03:53.694 ********** 2026-03-09 00:57:34.469816 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.469824 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.469832 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.469840 | orchestrator | 2026-03-09 00:57:34.469848 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-09 00:57:34.469857 | orchestrator | Monday 09 March 2026 00:49:44 +0000 (0:00:00.394) 0:03:54.088 ********** 2026-03-09 00:57:34.469880 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.469889 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.469897 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.469906 | orchestrator | 2026-03-09 00:57:34.469913 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-09 00:57:34.469921 | orchestrator | Monday 09 March 2026 00:49:44 +0000 (0:00:00.332) 0:03:54.421 ********** 2026-03-09 00:57:34.469929 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.469938 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.469946 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.469955 | 
orchestrator | 2026-03-09 00:57:34.469963 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-09 00:57:34.469971 | orchestrator | Monday 09 March 2026 00:49:45 +0000 (0:00:00.329) 0:03:54.751 ********** 2026-03-09 00:57:34.469980 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.469988 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.469995 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.470003 | orchestrator | 2026-03-09 00:57:34.470033 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-09 00:57:34.470044 | orchestrator | Monday 09 March 2026 00:49:45 +0000 (0:00:00.559) 0:03:55.310 ********** 2026-03-09 00:57:34.470052 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.470060 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.470069 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.470077 | orchestrator | 2026-03-09 00:57:34.470084 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-09 00:57:34.470093 | orchestrator | Monday 09 March 2026 00:49:45 +0000 (0:00:00.286) 0:03:55.597 ********** 2026-03-09 00:57:34.470101 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.470115 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.470124 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.470132 | orchestrator | 2026-03-09 00:57:34.470140 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-09 00:57:34.470148 | orchestrator | Monday 09 March 2026 00:49:46 +0000 (0:00:00.329) 0:03:55.927 ********** 2026-03-09 00:57:34.470157 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.470164 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.470172 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.470181 | orchestrator | 
2026-03-09 00:57:34.470189 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-09 00:57:34.470197 | orchestrator | Monday 09 March 2026 00:49:46 +0000 (0:00:00.335) 0:03:56.263 **********
2026-03-09 00:57:34.470206 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.470214 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.470222 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.470230 | orchestrator | 
2026-03-09 00:57:34.470239 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-09 00:57:34.470251 | orchestrator | Monday 09 March 2026 00:49:47 +0000 (0:00:00.614) 0:03:56.877 **********
2026-03-09 00:57:34.470259 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.470267 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.470276 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.470284 | orchestrator | 
2026-03-09 00:57:34.470292 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-09 00:57:34.470301 | orchestrator | Monday 09 March 2026 00:49:47 +0000 (0:00:00.592) 0:03:57.470 **********
2026-03-09 00:57:34.470309 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.470317 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.470325 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.470333 | orchestrator | 
2026-03-09 00:57:34.470342 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-09 00:57:34.470350 | orchestrator | Monday 09 March 2026 00:49:48 +0000 (0:00:00.366) 0:03:57.836 **********
2026-03-09 00:57:34.470359 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.470367 | orchestrator | 
2026-03-09 00:57:34.470375 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-09 00:57:34.470384 | orchestrator | Monday 09 March 2026 00:49:49 +0000 (0:00:00.922) 0:03:58.759 **********
2026-03-09 00:57:34.470392 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.470401 | orchestrator | 
2026-03-09 00:57:34.470433 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-09 00:57:34.470442 | orchestrator | Monday 09 March 2026 00:49:49 +0000 (0:00:00.179) 0:03:58.938 **********
2026-03-09 00:57:34.470451 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-09 00:57:34.470460 | orchestrator | 
2026-03-09 00:57:34.470468 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-09 00:57:34.470476 | orchestrator | Monday 09 March 2026 00:49:50 +0000 (0:00:01.182) 0:04:00.121 **********
2026-03-09 00:57:34.470484 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.470493 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.470501 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.470510 | orchestrator | 
2026-03-09 00:57:34.470518 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-09 00:57:34.470526 | orchestrator | Monday 09 March 2026 00:49:50 +0000 (0:00:00.360) 0:04:00.481 **********
2026-03-09 00:57:34.470535 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.470543 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.470551 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.470559 | orchestrator | 
2026-03-09 00:57:34.470568 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-09 00:57:34.470576 | orchestrator | Monday 09 March 2026 00:49:51 +0000 (0:00:00.353) 0:04:00.835 **********
2026-03-09 00:57:34.470585 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.470599 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.470607 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.470615 | orchestrator | 
2026-03-09 00:57:34.470624 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-09 00:57:34.470632 | orchestrator | Monday 09 March 2026 00:49:52 +0000 (0:00:01.685) 0:04:02.521 **********
2026-03-09 00:57:34.470640 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.470649 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.470657 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.470665 | orchestrator | 
2026-03-09 00:57:34.470674 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-09 00:57:34.470682 | orchestrator | Monday 09 March 2026 00:49:53 +0000 (0:00:00.830) 0:04:03.351 **********
2026-03-09 00:57:34.470690 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.470698 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.470707 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.470715 | orchestrator | 
2026-03-09 00:57:34.470723 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-09 00:57:34.470732 | orchestrator | Monday 09 March 2026 00:49:55 +0000 (0:00:01.526) 0:04:04.877 **********
2026-03-09 00:57:34.470740 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.470748 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.470757 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.470765 | orchestrator | 
2026-03-09 00:57:34.470773 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-09 00:57:34.470782 | orchestrator | Monday 09 March 2026 00:49:56 +0000 (0:00:00.783) 0:04:05.661 **********
2026-03-09 00:57:34.470790 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.470798 | orchestrator | 
2026-03-09 00:57:34.470806 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-09 00:57:34.470815 | orchestrator | Monday 09 March 2026 00:49:58 +0000 (0:00:02.178) 0:04:07.839 **********
2026-03-09 00:57:34.470823 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.470831 | orchestrator | 
2026-03-09 00:57:34.470840 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-09 00:57:34.470848 | orchestrator | Monday 09 March 2026 00:49:58 +0000 (0:00:00.692) 0:04:08.532 **********
2026-03-09 00:57:34.470857 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-09 00:57:34.470912 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 00:57:34.470921 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-09 00:57:34.470930 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-09 00:57:34.470938 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-09 00:57:34.470947 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-09 00:57:34.470955 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-09 00:57:34.470963 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-09 00:57:34.470971 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-09 00:57:34.470980 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-09 00:57:34.470988 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-09 00:57:34.470996 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-09 00:57:34.471005 | orchestrator | 
2026-03-09 00:57:34.471017 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-09 00:57:34.471025 | orchestrator | Monday 09 March 2026 00:50:02 +0000 (0:00:03.371) 0:04:11.903 **********
2026-03-09 00:57:34.471034 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.471041 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.471049 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.471057 | orchestrator | 
2026-03-09 00:57:34.471065 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-09 00:57:34.471073 | orchestrator | Monday 09 March 2026 00:50:03 +0000 (0:00:01.235) 0:04:13.139 **********
2026-03-09 00:57:34.471086 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.471094 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.471102 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.471110 | orchestrator | 
2026-03-09 00:57:34.471118 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-09 00:57:34.471126 | orchestrator | Monday 09 March 2026 00:50:03 +0000 (0:00:00.367) 0:04:13.507 **********
2026-03-09 00:57:34.471134 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.471141 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.471149 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.471157 | orchestrator | 
2026-03-09 00:57:34.471165 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-09 00:57:34.471173 | orchestrator | Monday 09 March 2026 00:50:04 +0000 (0:00:00.585) 0:04:14.092 **********
2026-03-09 00:57:34.471181 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.471213 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.471221 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.471229 | orchestrator | 
2026-03-09 00:57:34.471237 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-09 00:57:34.471245 | orchestrator | Monday 09 March 2026 00:50:06 +0000 (0:00:01.614) 0:04:15.706 **********
2026-03-09 00:57:34.471253 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.471261 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.471269 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.471276 | orchestrator | 
2026-03-09 00:57:34.471284 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-09 00:57:34.471292 | orchestrator | Monday 09 March 2026 00:50:07 +0000 (0:00:01.492) 0:04:17.199 **********
2026-03-09 00:57:34.471300 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.471308 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.471316 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.471324 | orchestrator | 
2026-03-09 00:57:34.471331 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-09 00:57:34.471339 | orchestrator | Monday 09 March 2026 00:50:07 +0000 (0:00:00.426) 0:04:17.626 **********
2026-03-09 00:57:34.471347 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.471355 | orchestrator | 
2026-03-09 00:57:34.471363 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-09 00:57:34.471370 | orchestrator | Monday 09 March 2026 00:50:08 +0000 (0:00:00.879) 0:04:18.505 **********
2026-03-09 00:57:34.471378 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.471386 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.471394 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.471402 | orchestrator | 
2026-03-09 00:57:34.471410 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-09 00:57:34.471418 | orchestrator | Monday 09 March 2026 00:50:09 +0000 (0:00:00.386) 0:04:18.892 **********
2026-03-09 00:57:34.471426 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.471433 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.471441 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.471449 | orchestrator | 
2026-03-09 00:57:34.471457 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-09 00:57:34.471465 | orchestrator | Monday 09 March 2026 00:50:09 +0000 (0:00:00.529) 0:04:19.421 **********
2026-03-09 00:57:34.471473 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.471481 | orchestrator | 
2026-03-09 00:57:34.471489 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-09 00:57:34.471497 | orchestrator | Monday 09 March 2026 00:50:10 +0000 (0:00:00.940) 0:04:20.362 **********
2026-03-09 00:57:34.471504 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.471513 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.471524 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.471532 | orchestrator | 
2026-03-09 00:57:34.471540 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-09 00:57:34.471548 | orchestrator | Monday 09 March 2026 00:50:12 +0000 (0:00:01.787) 0:04:22.149 **********
2026-03-09 00:57:34.471556 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.471564 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.471572 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.471580 | orchestrator | 
2026-03-09 00:57:34.471588 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-09 00:57:34.471596 | orchestrator | Monday 09 March 2026 00:50:13 +0000 (0:00:01.221) 0:04:23.370 **********
2026-03-09 00:57:34.471603 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.471612 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.471619 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.471627 | orchestrator | 
2026-03-09 00:57:34.471635 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-09 00:57:34.471643 | orchestrator | Monday 09 March 2026 00:50:15 +0000 (0:00:02.105) 0:04:25.476 **********
2026-03-09 00:57:34.471651 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:57:34.471659 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:57:34.471666 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:57:34.471674 | orchestrator | 
2026-03-09 00:57:34.471682 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-09 00:57:34.471689 | orchestrator | Monday 09 March 2026 00:50:18 +0000 (0:00:02.659) 0:04:28.135 **********
2026-03-09 00:57:34.471696 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.471703 | orchestrator | 
2026-03-09 00:57:34.471714 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-09 00:57:34.471721 | orchestrator | Monday 09 March 2026 00:50:19 +0000 (0:00:00.639) 0:04:28.774 **********
2026-03-09 00:57:34.471728 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-09 00:57:34.471734 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.471742 | orchestrator | 
2026-03-09 00:57:34.471750 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-09 00:57:34.471758 | orchestrator | Monday 09 March 2026 00:50:41 +0000 (0:00:22.324) 0:04:51.099 **********
2026-03-09 00:57:34.471766 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.471774 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.471781 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.471789 | orchestrator | 
2026-03-09 00:57:34.471797 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-09 00:57:34.471805 | orchestrator | Monday 09 March 2026 00:50:52 +0000 (0:00:11.076) 0:05:02.176 **********
2026-03-09 00:57:34.471813 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.471821 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.471829 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.471837 | orchestrator | 
2026-03-09 00:57:34.471845 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-09 00:57:34.471900 | orchestrator | Monday 09 March 2026 00:50:53 +0000 (0:00:01.470) 0:05:03.646 **********
2026-03-09 00:57:34.471913 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c13e606e7dddf8d0902dd791150eb11d8b7b00cb'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-09 00:57:34.471923 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c13e606e7dddf8d0902dd791150eb11d8b7b00cb'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-09 00:57:34.471939 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c13e606e7dddf8d0902dd791150eb11d8b7b00cb'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-09 00:57:34.471949 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c13e606e7dddf8d0902dd791150eb11d8b7b00cb'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-09 00:57:34.471957 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c13e606e7dddf8d0902dd791150eb11d8b7b00cb'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-09 00:57:34.471967 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c13e606e7dddf8d0902dd791150eb11d8b7b00cb'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__c13e606e7dddf8d0902dd791150eb11d8b7b00cb'}])
2026-03-09 00:57:34.471976 | orchestrator | 
2026-03-09 00:57:34.471984 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-09 00:57:34.471992 | orchestrator | Monday 09 March 2026 00:51:09 +0000 (0:00:15.627) 0:05:19.274 **********
2026-03-09 00:57:34.472000 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.472008 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.472016 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.472024 | orchestrator | 
2026-03-09 00:57:34.472032 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-09 00:57:34.472040 | orchestrator | Monday 09 March 2026 00:51:10 +0000 (0:00:00.483) 0:05:19.758 **********
2026-03-09 00:57:34.472048 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.472056 | orchestrator | 
2026-03-09 00:57:34.472064 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-09 00:57:34.472072 | orchestrator | Monday 09 March 2026 00:51:11 +0000 (0:00:01.156) 0:05:20.914 **********
2026-03-09 00:57:34.472080 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.472088 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.472100 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.472108 | orchestrator | 
2026-03-09 00:57:34.472116 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-09 00:57:34.472124 | orchestrator | Monday 09 March 2026 00:51:11 +0000 (0:00:00.321) 0:05:21.235 **********
2026-03-09 00:57:34.472132 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.472140 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.472148 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.472156 | orchestrator | 
2026-03-09 00:57:34.472164 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-09 00:57:34.472172 | orchestrator | Monday 09 March 2026 00:51:11 +0000 (0:00:00.378) 0:05:21.613 **********
2026-03-09 00:57:34.472180 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2026-03-09 00:57:34.472188 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2026-03-09 00:57:34.472196 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2026-03-09 00:57:34.472209 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.472217 | orchestrator | 
2026-03-09 00:57:34.472225 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-09 00:57:34.472233 | orchestrator | Monday 09 March 2026 00:51:12 +0000 (0:00:00.823) 0:05:22.437 **********
2026-03-09 00:57:34.472241 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.472249 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.472278 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.472287 | orchestrator | 
2026-03-09 00:57:34.472295 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-09 00:57:34.472302 | orchestrator | 
2026-03-09 00:57:34.472310 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-09 00:57:34.472318 | orchestrator | Monday 09 March 2026 00:51:13 +0000 (0:00:00.741) 0:05:23.179 **********
2026-03-09 00:57:34.472325 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.472333 | orchestrator | 
2026-03-09 00:57:34.472340 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-09 00:57:34.472348 | orchestrator | Monday 09 March 2026 00:51:14 +0000 (0:00:00.511) 0:05:23.691 **********
2026-03-09 00:57:34.472355 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:57:34.472363 | orchestrator | 
2026-03-09 00:57:34.472370 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-09 00:57:34.472378 | orchestrator | Monday 09 March 2026 00:51:14 +0000 (0:00:00.693) 0:05:24.384 **********
2026-03-09 00:57:34.472386 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.472393 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.472400 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.472408 | orchestrator | 
2026-03-09 00:57:34.472416 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-09 00:57:34.472423 | orchestrator | Monday 09 March 2026 00:51:15 +0000 (0:00:00.314) 0:05:25.107 **********
2026-03-09 00:57:34.472430 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.472438 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.472445 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.472453 | orchestrator | 
2026-03-09 00:57:34.472460 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-09 00:57:34.472468 | orchestrator | Monday 09 March 2026 00:51:15 +0000 (0:00:00.484) 0:05:25.421 **********
2026-03-09 00:57:34.472475 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.472483 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.472490 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.472498 | orchestrator | 
2026-03-09 00:57:34.472505 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-09 00:57:34.472513 | orchestrator | Monday 09 March 2026 00:51:16 +0000 (0:00:00.318) 0:05:25.905 **********
2026-03-09 00:57:34.472520 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.472528 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.472536 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.472543 | orchestrator | 
2026-03-09 00:57:34.472550 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-09 00:57:34.472558 | orchestrator | Monday 09 March 2026 00:51:16 +0000 (0:00:00.318) 0:05:26.224 **********
2026-03-09 00:57:34.472565 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.472573 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.472581 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.472588 | orchestrator | 
2026-03-09 00:57:34.472596 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-09 00:57:34.472603 | orchestrator | Monday 09 March 2026 00:51:17 +0000 (0:00:00.693) 0:05:26.917 **********
2026-03-09 00:57:34.472611 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.472618 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.472626 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.472641 | orchestrator | 
2026-03-09 00:57:34.472649 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-09 00:57:34.472656 | orchestrator | Monday 09 March 2026 00:51:17 +0000 (0:00:00.297) 0:05:27.215 **********
2026-03-09 00:57:34.472664 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.472672 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.472679 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.472687 | orchestrator | 
2026-03-09 00:57:34.472694 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-09 00:57:34.472702 | orchestrator | Monday 09 March 2026 00:51:18 +0000 (0:00:00.494) 0:05:27.709 **********
2026-03-09 00:57:34.472709 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.472717 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.472724 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.472732 | orchestrator | 
2026-03-09 00:57:34.472739 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-09 00:57:34.472747 | orchestrator | Monday 09 March 2026 00:51:18 +0000 (0:00:00.784) 0:05:28.493 **********
2026-03-09 00:57:34.472754 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.472762 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.472769 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.472777 | orchestrator | 
2026-03-09 00:57:34.472788 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-09 00:57:34.472796 | orchestrator | Monday 09 March 2026 00:51:19 +0000 (0:00:00.784) 0:05:29.278 **********
2026-03-09 00:57:34.472804 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.472811 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.472819 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.472826 | orchestrator | 
2026-03-09 00:57:34.472834 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-09 00:57:34.472841 | orchestrator | Monday 09 March 2026 00:51:19 +0000 (0:00:00.322) 0:05:29.600 **********
2026-03-09 00:57:34.472849 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:57:34.472856 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:57:34.472878 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:57:34.472886 | orchestrator | 
2026-03-09 00:57:34.472894 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-09 00:57:34.472901 | orchestrator | Monday 09 March 2026 00:51:20 +0000 (0:00:00.488) 0:05:30.088 **********
2026-03-09 00:57:34.472909 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.472916 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.472922 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.472929 | orchestrator | 
2026-03-09 00:57:34.472936 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-09 00:57:34.472966 | orchestrator | Monday 09 March 2026 00:51:21 +0000 (0:00:00.636) 0:05:30.725 **********
2026-03-09 00:57:34.472976 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.472983 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.472990 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.472996 | orchestrator | 
2026-03-09 00:57:34.473003 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-09 00:57:34.473011 | orchestrator | Monday 09 March 2026 00:51:21 +0000 (0:00:00.368) 0:05:31.093 **********
2026-03-09 00:57:34.473018 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.473024 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.473031 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.473039 | orchestrator | 
2026-03-09 00:57:34.473046 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-09 00:57:34.473052 | orchestrator | Monday 09 March 2026 00:51:21 +0000 (0:00:00.443) 0:05:31.537 **********
2026-03-09 00:57:34.473059 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.473066 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.473077 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.473086 | orchestrator | 
2026-03-09 00:57:34.473093 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-09 00:57:34.473107 | orchestrator | Monday 09 March 2026 00:51:22 +0000 (0:00:00.519) 0:05:32.057 **********
2026-03-09 00:57:34.473115 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:57:34.473122 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:57:34.473129 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:57:34.473135 | orchestrator | 
2026-03-09 00:57:34.473143 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-09 00:57:34.473150 | orchestrator | Monday 09 March 2026 00:51:23 +0000 (0:00:00.702) 0:05:32.760 ********** 2026-03-09 00:57:34.473157 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.473164 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.473172 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.473179 | orchestrator | 2026-03-09 00:57:34.473186 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-09 00:57:34.473194 | orchestrator | Monday 09 March 2026 00:51:23 +0000 (0:00:00.442) 0:05:33.202 ********** 2026-03-09 00:57:34.473201 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.473208 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.473215 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.473223 | orchestrator | 2026-03-09 00:57:34.473230 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-09 00:57:34.473237 | orchestrator | Monday 09 March 2026 00:51:23 +0000 (0:00:00.364) 0:05:33.566 ********** 2026-03-09 00:57:34.473244 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.473251 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.473258 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.473265 | orchestrator | 2026-03-09 00:57:34.473272 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-09 00:57:34.473279 | orchestrator | Monday 09 March 2026 00:51:24 +0000 (0:00:00.777) 0:05:34.344 ********** 2026-03-09 00:57:34.473287 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-09 00:57:34.473295 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 00:57:34.473303 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-03-09 00:57:34.473310 | orchestrator | 2026-03-09 00:57:34.473318 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-09 00:57:34.473326 | orchestrator | Monday 09 March 2026 00:51:25 +0000 (0:00:00.748) 0:05:35.092 ********** 2026-03-09 00:57:34.473334 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:34.473342 | orchestrator | 2026-03-09 00:57:34.473350 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-09 00:57:34.473357 | orchestrator | Monday 09 March 2026 00:51:26 +0000 (0:00:00.573) 0:05:35.666 ********** 2026-03-09 00:57:34.473364 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:34.473371 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:34.473377 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:34.473384 | orchestrator | 2026-03-09 00:57:34.473391 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-09 00:57:34.473398 | orchestrator | Monday 09 March 2026 00:51:26 +0000 (0:00:00.721) 0:05:36.387 ********** 2026-03-09 00:57:34.473406 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.473413 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.473421 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.473428 | orchestrator | 2026-03-09 00:57:34.473436 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-09 00:57:34.473444 | orchestrator | Monday 09 March 2026 00:51:27 +0000 (0:00:00.516) 0:05:36.904 ********** 2026-03-09 00:57:34.473451 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 00:57:34.473467 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 00:57:34.473475 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-09 00:57:34.473483 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-09 00:57:34.473499 | orchestrator | 2026-03-09 00:57:34.473507 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-09 00:57:34.473514 | orchestrator | Monday 09 March 2026 00:51:39 +0000 (0:00:11.980) 0:05:48.885 ********** 2026-03-09 00:57:34.473522 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.473529 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.473534 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.473538 | orchestrator | 2026-03-09 00:57:34.473543 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-09 00:57:34.473548 | orchestrator | Monday 09 March 2026 00:51:39 +0000 (0:00:00.388) 0:05:49.274 ********** 2026-03-09 00:57:34.473552 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-09 00:57:34.473560 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-09 00:57:34.473567 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-09 00:57:34.473575 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-09 00:57:34.473582 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:57:34.473643 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:57:34.473654 | orchestrator | 2026-03-09 00:57:34.473662 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-09 00:57:34.473669 | orchestrator | Monday 09 March 2026 00:51:42 +0000 (0:00:03.020) 0:05:52.294 ********** 2026-03-09 00:57:34.473678 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-09 00:57:34.473685 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-09 00:57:34.473692 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-09 
00:57:34.473700 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 00:57:34.473708 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-09 00:57:34.473715 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-09 00:57:34.473723 | orchestrator | 2026-03-09 00:57:34.473730 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-09 00:57:34.473738 | orchestrator | Monday 09 March 2026 00:51:44 +0000 (0:00:01.548) 0:05:53.842 ********** 2026-03-09 00:57:34.473746 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.473753 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.473761 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.473768 | orchestrator | 2026-03-09 00:57:34.473776 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-09 00:57:34.473782 | orchestrator | Monday 09 March 2026 00:51:45 +0000 (0:00:00.929) 0:05:54.772 ********** 2026-03-09 00:57:34.473790 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.473797 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.473804 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.473812 | orchestrator | 2026-03-09 00:57:34.473820 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-09 00:57:34.473827 | orchestrator | Monday 09 March 2026 00:51:45 +0000 (0:00:00.445) 0:05:55.218 ********** 2026-03-09 00:57:34.473835 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.473842 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.473850 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.473857 | orchestrator | 2026-03-09 00:57:34.473883 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-09 00:57:34.473890 | orchestrator | Monday 09 March 2026 00:51:45 +0000 (0:00:00.336) 0:05:55.554 
********** 2026-03-09 00:57:34.473898 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:34.473906 | orchestrator | 2026-03-09 00:57:34.473913 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-09 00:57:34.473920 | orchestrator | Monday 09 March 2026 00:51:46 +0000 (0:00:00.896) 0:05:56.451 ********** 2026-03-09 00:57:34.473927 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.473932 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.473943 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.473947 | orchestrator | 2026-03-09 00:57:34.473952 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-09 00:57:34.473957 | orchestrator | Monday 09 March 2026 00:51:47 +0000 (0:00:00.355) 0:05:56.807 ********** 2026-03-09 00:57:34.473961 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.473966 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.473970 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.473975 | orchestrator | 2026-03-09 00:57:34.473979 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-09 00:57:34.473984 | orchestrator | Monday 09 March 2026 00:51:47 +0000 (0:00:00.341) 0:05:57.148 ********** 2026-03-09 00:57:34.473989 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:34.473993 | orchestrator | 2026-03-09 00:57:34.473998 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-09 00:57:34.474002 | orchestrator | Monday 09 March 2026 00:51:48 +0000 (0:00:00.880) 0:05:58.029 ********** 2026-03-09 00:57:34.474007 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:34.474032 | orchestrator | changed: 
[testbed-node-1] 2026-03-09 00:57:34.474038 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:34.474042 | orchestrator | 2026-03-09 00:57:34.474047 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-09 00:57:34.474052 | orchestrator | Monday 09 March 2026 00:51:49 +0000 (0:00:01.488) 0:05:59.517 ********** 2026-03-09 00:57:34.474056 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:34.474061 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:34.474066 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:34.474070 | orchestrator | 2026-03-09 00:57:34.474075 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-09 00:57:34.474079 | orchestrator | Monday 09 March 2026 00:51:51 +0000 (0:00:01.318) 0:06:00.836 ********** 2026-03-09 00:57:34.474088 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:34.474093 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:34.474098 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:34.474102 | orchestrator | 2026-03-09 00:57:34.474107 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-09 00:57:34.474111 | orchestrator | Monday 09 March 2026 00:51:53 +0000 (0:00:01.934) 0:06:02.770 ********** 2026-03-09 00:57:34.474116 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:34.474120 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:34.474125 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:34.474130 | orchestrator | 2026-03-09 00:57:34.474134 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-09 00:57:34.474139 | orchestrator | Monday 09 March 2026 00:51:55 +0000 (0:00:01.968) 0:06:04.739 ********** 2026-03-09 00:57:34.474143 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.474148 | orchestrator | skipping: 
[testbed-node-1] 2026-03-09 00:57:34.474152 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-09 00:57:34.474159 | orchestrator | 2026-03-09 00:57:34.474167 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-09 00:57:34.474175 | orchestrator | Monday 09 March 2026 00:51:56 +0000 (0:00:00.932) 0:06:05.672 ********** 2026-03-09 00:57:34.474210 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-09 00:57:34.474220 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-09 00:57:34.474227 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-09 00:57:34.474233 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-09 00:57:34.474240 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-03-09 00:57:34.474255 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-09 00:57:34.474263 | orchestrator | 2026-03-09 00:57:34.474270 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-09 00:57:34.474278 | orchestrator | Monday 09 March 2026 00:52:26 +0000 (0:00:30.560) 0:06:36.232 ********** 2026-03-09 00:57:34.474286 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-09 00:57:34.474294 | orchestrator | 2026-03-09 00:57:34.474301 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-09 00:57:34.474309 | orchestrator | Monday 09 March 2026 00:52:28 +0000 (0:00:01.531) 0:06:37.764 ********** 2026-03-09 00:57:34.474316 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.474320 | orchestrator | 2026-03-09 00:57:34.474325 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-09 00:57:34.474329 | orchestrator | Monday 09 March 2026 00:52:28 +0000 (0:00:00.348) 0:06:38.113 ********** 2026-03-09 00:57:34.474334 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.474338 | orchestrator | 2026-03-09 00:57:34.474343 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-09 00:57:34.474348 | orchestrator | Monday 09 March 2026 00:52:28 +0000 (0:00:00.139) 0:06:38.253 ********** 2026-03-09 00:57:34.474352 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-09 00:57:34.474357 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-09 00:57:34.474361 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-09 00:57:34.474366 | orchestrator | 2026-03-09 00:57:34.474370 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-09 00:57:34.474375 | orchestrator | Monday 09 March 2026 00:52:35 +0000 (0:00:06.476) 0:06:44.729 ********** 2026-03-09 00:57:34.474380 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-09 00:57:34.474384 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-09 00:57:34.474389 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-09 00:57:34.474393 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-09 00:57:34.474398 | orchestrator | 2026-03-09 00:57:34.474403 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-09 00:57:34.474407 | orchestrator | Monday 09 March 2026 00:52:40 +0000 (0:00:05.325) 0:06:50.054 ********** 2026-03-09 00:57:34.474412 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:34.474416 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:34.474421 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:34.474425 | orchestrator | 2026-03-09 00:57:34.474430 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-09 00:57:34.474435 | orchestrator | Monday 09 March 2026 00:52:41 +0000 (0:00:00.709) 0:06:50.764 ********** 2026-03-09 00:57:34.474439 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:34.474444 | orchestrator | 2026-03-09 00:57:34.474448 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-09 00:57:34.474453 | orchestrator | Monday 09 March 2026 00:52:41 +0000 (0:00:00.794) 0:06:51.559 ********** 2026-03-09 00:57:34.474458 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.474462 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.474467 | orchestrator | ok: 
[testbed-node-2] 2026-03-09 00:57:34.474471 | orchestrator | 2026-03-09 00:57:34.474476 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-09 00:57:34.474480 | orchestrator | Monday 09 March 2026 00:52:42 +0000 (0:00:00.341) 0:06:51.901 ********** 2026-03-09 00:57:34.474485 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:57:34.474489 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:57:34.474498 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:57:34.474503 | orchestrator | 2026-03-09 00:57:34.474511 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-09 00:57:34.474516 | orchestrator | Monday 09 March 2026 00:52:43 +0000 (0:00:01.223) 0:06:53.124 ********** 2026-03-09 00:57:34.474520 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-09 00:57:34.474525 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-09 00:57:34.474529 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-09 00:57:34.474534 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.474538 | orchestrator | 2026-03-09 00:57:34.474543 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-09 00:57:34.474548 | orchestrator | Monday 09 March 2026 00:52:44 +0000 (0:00:00.802) 0:06:53.927 ********** 2026-03-09 00:57:34.474552 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.474557 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.474562 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.474566 | orchestrator | 2026-03-09 00:57:34.474571 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-09 00:57:34.474575 | orchestrator | 2026-03-09 00:57:34.474580 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-09 
00:57:34.474585 | orchestrator | Monday 09 March 2026 00:52:45 +0000 (0:00:00.915) 0:06:54.842 ********** 2026-03-09 00:57:34.474608 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.474614 | orchestrator | 2026-03-09 00:57:34.474619 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-09 00:57:34.474624 | orchestrator | Monday 09 March 2026 00:52:45 +0000 (0:00:00.576) 0:06:55.419 ********** 2026-03-09 00:57:34.474628 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.474633 | orchestrator | 2026-03-09 00:57:34.474638 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-09 00:57:34.474642 | orchestrator | Monday 09 March 2026 00:52:46 +0000 (0:00:00.761) 0:06:56.180 ********** 2026-03-09 00:57:34.474647 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.474651 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.474656 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.474661 | orchestrator | 2026-03-09 00:57:34.474666 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-09 00:57:34.474670 | orchestrator | Monday 09 March 2026 00:52:46 +0000 (0:00:00.317) 0:06:56.497 ********** 2026-03-09 00:57:34.474675 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.474679 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.474684 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.474688 | orchestrator | 2026-03-09 00:57:34.474693 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-09 00:57:34.474697 | orchestrator | Monday 09 March 2026 00:52:47 +0000 (0:00:00.749) 0:06:57.247 ********** 
2026-03-09 00:57:34.474702 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.474707 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.474711 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.474716 | orchestrator | 2026-03-09 00:57:34.474720 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-09 00:57:34.474725 | orchestrator | Monday 09 March 2026 00:52:48 +0000 (0:00:00.675) 0:06:57.923 ********** 2026-03-09 00:57:34.474729 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.474734 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.474739 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.474743 | orchestrator | 2026-03-09 00:57:34.474748 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-09 00:57:34.474752 | orchestrator | Monday 09 March 2026 00:52:49 +0000 (0:00:00.950) 0:06:58.873 ********** 2026-03-09 00:57:34.474757 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.474777 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.474782 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.474786 | orchestrator | 2026-03-09 00:57:34.474791 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-09 00:57:34.474796 | orchestrator | Monday 09 March 2026 00:52:49 +0000 (0:00:00.343) 0:06:59.217 ********** 2026-03-09 00:57:34.474800 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.474805 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.474809 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.474814 | orchestrator | 2026-03-09 00:57:34.474819 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-09 00:57:34.474823 | orchestrator | Monday 09 March 2026 00:52:49 +0000 (0:00:00.350) 0:06:59.567 ********** 2026-03-09 00:57:34.474828 | 
orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.474832 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.474837 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.474841 | orchestrator | 2026-03-09 00:57:34.474846 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-09 00:57:34.474851 | orchestrator | Monday 09 March 2026 00:52:50 +0000 (0:00:00.350) 0:06:59.918 ********** 2026-03-09 00:57:34.474855 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.474929 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.474941 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.474949 | orchestrator | 2026-03-09 00:57:34.474957 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-09 00:57:34.474965 | orchestrator | Monday 09 March 2026 00:52:51 +0000 (0:00:01.002) 0:07:00.920 ********** 2026-03-09 00:57:34.474972 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.474980 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.474987 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.474994 | orchestrator | 2026-03-09 00:57:34.475001 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-09 00:57:34.475009 | orchestrator | Monday 09 March 2026 00:52:51 +0000 (0:00:00.725) 0:07:01.646 ********** 2026-03-09 00:57:34.475017 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.475025 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.475032 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.475040 | orchestrator | 2026-03-09 00:57:34.475048 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-09 00:57:34.475061 | orchestrator | Monday 09 March 2026 00:52:52 +0000 (0:00:00.349) 0:07:01.995 ********** 2026-03-09 00:57:34.475070 | orchestrator | skipping: 
[testbed-node-3] 2026-03-09 00:57:34.475078 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.475087 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.475094 | orchestrator | 2026-03-09 00:57:34.475103 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-09 00:57:34.475111 | orchestrator | Monday 09 March 2026 00:52:52 +0000 (0:00:00.356) 0:07:02.351 ********** 2026-03-09 00:57:34.475119 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.475128 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.475133 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.475138 | orchestrator | 2026-03-09 00:57:34.475142 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-09 00:57:34.475147 | orchestrator | Monday 09 March 2026 00:52:53 +0000 (0:00:00.671) 0:07:03.023 ********** 2026-03-09 00:57:34.475151 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.475156 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.475160 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.475165 | orchestrator | 2026-03-09 00:57:34.475169 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-09 00:57:34.475174 | orchestrator | Monday 09 March 2026 00:52:53 +0000 (0:00:00.402) 0:07:03.426 ********** 2026-03-09 00:57:34.475178 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.475183 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.475192 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.475203 | orchestrator | 2026-03-09 00:57:34.475207 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-09 00:57:34.475211 | orchestrator | Monday 09 March 2026 00:52:54 +0000 (0:00:00.373) 0:07:03.799 ********** 2026-03-09 00:57:34.475215 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.475219 | 
orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.475223 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.475227 | orchestrator | 2026-03-09 00:57:34.475232 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-09 00:57:34.475236 | orchestrator | Monday 09 March 2026 00:52:54 +0000 (0:00:00.505) 0:07:04.304 ********** 2026-03-09 00:57:34.475240 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.475244 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.475248 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.475252 | orchestrator | 2026-03-09 00:57:34.475256 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-09 00:57:34.475260 | orchestrator | Monday 09 March 2026 00:52:55 +0000 (0:00:00.701) 0:07:05.006 ********** 2026-03-09 00:57:34.475264 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.475269 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.475273 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.475277 | orchestrator | 2026-03-09 00:57:34.475281 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-09 00:57:34.475285 | orchestrator | Monday 09 March 2026 00:52:55 +0000 (0:00:00.419) 0:07:05.426 ********** 2026-03-09 00:57:34.475289 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.475293 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.475297 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.475301 | orchestrator | 2026-03-09 00:57:34.475306 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-09 00:57:34.475310 | orchestrator | Monday 09 March 2026 00:52:56 +0000 (0:00:00.425) 0:07:05.851 ********** 2026-03-09 00:57:34.475314 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.475318 | orchestrator | ok: 
[testbed-node-4] 2026-03-09 00:57:34.475322 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.475326 | orchestrator | 2026-03-09 00:57:34.475330 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-09 00:57:34.475334 | orchestrator | Monday 09 March 2026 00:52:56 +0000 (0:00:00.558) 0:07:06.409 ********** 2026-03-09 00:57:34.475338 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.475342 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.475346 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.475351 | orchestrator | 2026-03-09 00:57:34.475355 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-09 00:57:34.475359 | orchestrator | Monday 09 March 2026 00:52:57 +0000 (0:00:00.692) 0:07:07.102 ********** 2026-03-09 00:57:34.475363 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 00:57:34.475367 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 00:57:34.475371 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 00:57:34.475375 | orchestrator | 2026-03-09 00:57:34.475379 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-09 00:57:34.475383 | orchestrator | Monday 09 March 2026 00:52:58 +0000 (0:00:00.785) 0:07:07.887 ********** 2026-03-09 00:57:34.475387 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.475392 | orchestrator | 2026-03-09 00:57:34.475396 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-09 00:57:34.475400 | orchestrator | Monday 09 March 2026 00:52:58 +0000 (0:00:00.687) 0:07:08.575 ********** 2026-03-09 00:57:34.475404 | orchestrator | skipping: 
[testbed-node-3] 2026-03-09 00:57:34.475408 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.475412 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.475421 | orchestrator | 2026-03-09 00:57:34.475426 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-09 00:57:34.475430 | orchestrator | Monday 09 March 2026 00:52:59 +0000 (0:00:00.618) 0:07:09.193 ********** 2026-03-09 00:57:34.475434 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.475438 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.475442 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.475446 | orchestrator | 2026-03-09 00:57:34.475450 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-09 00:57:34.475454 | orchestrator | Monday 09 March 2026 00:52:59 +0000 (0:00:00.377) 0:07:09.571 ********** 2026-03-09 00:57:34.475458 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.475462 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.475467 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.475471 | orchestrator | 2026-03-09 00:57:34.475478 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-09 00:57:34.475482 | orchestrator | Monday 09 March 2026 00:53:00 +0000 (0:00:00.715) 0:07:10.286 ********** 2026-03-09 00:57:34.475487 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.475491 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.475495 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.475499 | orchestrator | 2026-03-09 00:57:34.475503 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-09 00:57:34.475507 | orchestrator | Monday 09 March 2026 00:53:01 +0000 (0:00:00.454) 0:07:10.741 ********** 2026-03-09 00:57:34.475511 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-09 00:57:34.475516 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-09 00:57:34.475520 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-09 00:57:34.475524 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-09 00:57:34.475528 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-09 00:57:34.475540 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-09 00:57:34.475544 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-09 00:57:34.475548 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-09 00:57:34.475552 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-09 00:57:34.475556 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-09 00:57:34.475560 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-09 00:57:34.475564 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-09 00:57:34.475569 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-09 00:57:34.475573 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-09 00:57:34.475577 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-09 00:57:34.475581 | orchestrator | 2026-03-09 00:57:34.475585 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
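The "Apply operating system tuning" task above loops over sysctl items such as `fs.aio-max-nr` and `vm.swappiness`. As a minimal sketch (the item values are taken from the task output above; the helper name is ours, not ceph-ansible's), those items map onto `sysctl.conf` lines like this:

```python
# Render ceph-ansible style os_tuning_params items into sysctl.conf lines.
# The 'enable' flag defaults to True; disabled entries are skipped, mirroring
# how the task only applies enabled tuning parameters.
def render_sysctl(params):
    lines = []
    for p in params:
        if not p.get("enable", True):
            continue
        lines.append(f"{p['name']} = {p['value']}")
    return "\n".join(lines)

# Items as reported in the task output above.
os_tuning_params = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

print(render_sysctl(os_tuning_params))
```

Note that `vm.min_free_kbytes` is computed earlier in the play ("Get default vm.min_free_kbytes" / "Set_fact vm_min_free_kbytes") rather than hard-coded.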
2026-03-09 00:57:34.475589 | orchestrator | Monday 09 March 2026 00:53:04 +0000 (0:00:03.680) 0:07:14.422 ********** 2026-03-09 00:57:34.475593 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.475597 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.475602 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.475606 | orchestrator | 2026-03-09 00:57:34.475610 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-09 00:57:34.475614 | orchestrator | Monday 09 March 2026 00:53:05 +0000 (0:00:00.396) 0:07:14.818 ********** 2026-03-09 00:57:34.475626 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.475630 | orchestrator | 2026-03-09 00:57:34.475635 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-09 00:57:34.475639 | orchestrator | Monday 09 March 2026 00:53:05 +0000 (0:00:00.583) 0:07:15.401 ********** 2026-03-09 00:57:34.475643 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-09 00:57:34.475647 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-09 00:57:34.475651 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-09 00:57:34.475655 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-09 00:57:34.475660 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-09 00:57:34.475664 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-09 00:57:34.475668 | orchestrator | 2026-03-09 00:57:34.475672 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-09 00:57:34.475676 | orchestrator | Monday 09 March 2026 00:53:07 +0000 (0:00:01.347) 0:07:16.749 ********** 2026-03-09 00:57:34.475681 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:57:34.475688 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-09 00:57:34.475695 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 00:57:34.475701 | orchestrator | 2026-03-09 00:57:34.475708 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-09 00:57:34.475714 | orchestrator | Monday 09 March 2026 00:53:09 +0000 (0:00:02.413) 0:07:19.163 ********** 2026-03-09 00:57:34.475720 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 00:57:34.475726 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-09 00:57:34.475733 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:57:34.475739 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-09 00:57:34.475745 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-09 00:57:34.475752 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:57:34.475759 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 00:57:34.475766 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-09 00:57:34.475772 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:57:34.475779 | orchestrator | 2026-03-09 00:57:34.475787 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-09 00:57:34.475791 | orchestrator | Monday 09 March 2026 00:53:10 +0000 (0:00:01.371) 0:07:20.534 ********** 2026-03-09 00:57:34.475798 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-09 00:57:34.475803 | orchestrator | 2026-03-09 00:57:34.475807 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-09 00:57:34.475811 | orchestrator | Monday 09 March 2026 00:53:13 +0000 (0:00:02.450) 0:07:22.985 ********** 2026-03-09 00:57:34.475816 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.475820 | orchestrator | 2026-03-09 00:57:34.475824 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-09 00:57:34.475828 | orchestrator | Monday 09 March 2026 00:53:13 +0000 (0:00:00.514) 0:07:23.499 ********** 2026-03-09 00:57:34.475833 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5d8e344b-ecd1-5c90-b783-cb125ac7004a', 'data_vg': 'ceph-5d8e344b-ecd1-5c90-b783-cb125ac7004a'}) 2026-03-09 00:57:34.475839 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5d9cda85-a301-5b16-a7fe-308b162b7259', 'data_vg': 'ceph-5d9cda85-a301-5b16-a7fe-308b162b7259'}) 2026-03-09 00:57:34.475847 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-deb603ca-2db3-5399-8e8d-1e0d01641e0c', 'data_vg': 'ceph-deb603ca-2db3-5399-8e8d-1e0d01641e0c'}) 2026-03-09 00:57:34.475856 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d6be2487-d224-518f-9009-30806e6fa587', 'data_vg': 'ceph-d6be2487-d224-518f-9009-30806e6fa587'}) 2026-03-09 00:57:34.475879 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8734b320-4ffe-530d-8e73-0aec819257b4', 'data_vg': 'ceph-8734b320-4ffe-530d-8e73-0aec819257b4'}) 2026-03-09 00:57:34.475884 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c1f67558-6290-50a7-9c09-ea5e74fb08ab', 'data_vg': 'ceph-c1f67558-6290-50a7-9c09-ea5e74fb08ab'}) 2026-03-09 00:57:34.475888 | orchestrator | 2026-03-09 00:57:34.475892 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-09 00:57:34.475896 | orchestrator | Monday 09 March 2026 00:53:58 +0000 (0:00:44.213) 0:08:07.713 ********** 2026-03-09 00:57:34.475901 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.475905 | orchestrator | skipping: [testbed-node-4] 2026-03-09 
00:57:34.475909 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.475913 | orchestrator | 2026-03-09 00:57:34.475917 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-09 00:57:34.475921 | orchestrator | Monday 09 March 2026 00:53:58 +0000 (0:00:00.339) 0:08:08.053 ********** 2026-03-09 00:57:34.475925 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.475929 | orchestrator | 2026-03-09 00:57:34.475934 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-09 00:57:34.475938 | orchestrator | Monday 09 March 2026 00:53:58 +0000 (0:00:00.581) 0:08:08.634 ********** 2026-03-09 00:57:34.475942 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.475946 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.475950 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.475954 | orchestrator | 2026-03-09 00:57:34.475958 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-09 00:57:34.475963 | orchestrator | Monday 09 March 2026 00:54:00 +0000 (0:00:01.037) 0:08:09.671 ********** 2026-03-09 00:57:34.475967 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.475971 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.475975 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.475979 | orchestrator | 2026-03-09 00:57:34.475983 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-09 00:57:34.475987 | orchestrator | Monday 09 March 2026 00:54:02 +0000 (0:00:02.819) 0:08:12.491 ********** 2026-03-09 00:57:34.475991 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.475996 | orchestrator | 2026-03-09 00:57:34.476000 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-09 00:57:34.476004 | orchestrator | Monday 09 March 2026 00:54:03 +0000 (0:00:00.594) 0:08:13.086 ********** 2026-03-09 00:57:34.476008 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:57:34.476012 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:57:34.476016 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:57:34.476020 | orchestrator | 2026-03-09 00:57:34.476025 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-09 00:57:34.476029 | orchestrator | Monday 09 March 2026 00:54:05 +0000 (0:00:01.606) 0:08:14.692 ********** 2026-03-09 00:57:34.476033 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:57:34.476037 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:57:34.476041 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:57:34.476045 | orchestrator | 2026-03-09 00:57:34.476049 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-09 00:57:34.476054 | orchestrator | Monday 09 March 2026 00:54:06 +0000 (0:00:01.198) 0:08:15.891 ********** 2026-03-09 00:57:34.476058 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:57:34.476062 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:57:34.476066 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:57:34.476070 | orchestrator | 2026-03-09 00:57:34.476074 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-09 00:57:34.476082 | orchestrator | Monday 09 March 2026 00:54:08 +0000 (0:00:01.866) 0:08:17.757 ********** 2026-03-09 00:57:34.476087 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476091 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.476095 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.476099 | orchestrator | 2026-03-09 00:57:34.476103 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-09 00:57:34.476107 | orchestrator | Monday 09 March 2026 00:54:08 +0000 (0:00:00.358) 0:08:18.116 ********** 2026-03-09 00:57:34.476114 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476119 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.476123 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.476127 | orchestrator | 2026-03-09 00:57:34.476131 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-09 00:57:34.476135 | orchestrator | Monday 09 March 2026 00:54:09 +0000 (0:00:00.672) 0:08:18.788 ********** 2026-03-09 00:57:34.476139 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-09 00:57:34.476143 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-09 00:57:34.476147 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-03-09 00:57:34.476151 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-03-09 00:57:34.476156 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-03-09 00:57:34.476160 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-09 00:57:34.476164 | orchestrator | 2026-03-09 00:57:34.476168 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-09 00:57:34.476172 | orchestrator | Monday 09 March 2026 00:54:10 +0000 (0:00:01.053) 0:08:19.842 ********** 2026-03-09 00:57:34.476176 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-09 00:57:34.476180 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-09 00:57:34.476185 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-09 00:57:34.476189 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-09 00:57:34.476193 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-09 00:57:34.476201 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-09 00:57:34.476205 | orchestrator | 2026-03-09 00:57:34.476209 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-09 00:57:34.476214 | orchestrator | Monday 09 March 2026 00:54:12 +0000 (0:00:02.361) 0:08:22.203 ********** 2026-03-09 00:57:34.476218 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-09 00:57:34.476222 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-09 00:57:34.476226 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-09 00:57:34.476230 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-09 00:57:34.476234 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-09 00:57:34.476239 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-09 00:57:34.476243 | orchestrator | 2026-03-09 00:57:34.476247 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-09 00:57:34.476251 | orchestrator | Monday 09 March 2026 00:54:17 +0000 (0:00:04.555) 0:08:26.759 ********** 2026-03-09 00:57:34.476255 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476259 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.476263 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-09 00:57:34.476268 | orchestrator | 2026-03-09 00:57:34.476272 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-09 00:57:34.476278 | orchestrator | Monday 09 March 2026 00:54:20 +0000 (0:00:03.629) 0:08:30.389 ********** 2026-03-09 00:57:34.476284 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476291 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.476297 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
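The "Wait for all osd to be up" step polls the cluster until every OSD reports up; the log above shows one retry out of a budget of 60 before success. A generic sketch of that retries/until pattern, with a stubbed status check standing in for querying the cluster (the real task asks a monitor for OSD status; the function names here are ours):

```python
import time

def wait_until(check, retries=60, delay=10, sleep=time.sleep):
    """Poll `check` until it returns True, like Ansible's retries/until.

    Returns the number of attempts used, or raises if the budget runs out.
    """
    for attempt in range(retries):
        if check():
            return attempt + 1
        sleep(delay)
    raise TimeoutError("OSDs did not come up within the retry budget")

# Stub: the cluster reports all OSDs up only on the second poll, matching
# the single retry seen in the log above (6 OSDs across 3 nodes).
polls = iter([
    {"num_osds": 6, "num_up_osds": 5},
    {"num_osds": 6, "num_up_osds": 6},
])
check = lambda: (lambda s: s["num_up_osds"] == s["num_osds"])(next(polls))
print(wait_until(check, sleep=lambda _: None))  # → 2
```

The task is delegated to a single host (testbed-node-5 → testbed-node-0 here) with `run_once` semantics, which is why the other nodes show as skipping.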
2026-03-09 00:57:34.476304 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-09 00:57:34.476310 | orchestrator | 2026-03-09 00:57:34.476316 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-09 00:57:34.476328 | orchestrator | Monday 09 March 2026 00:54:33 +0000 (0:00:12.719) 0:08:43.109 ********** 2026-03-09 00:57:34.476335 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476341 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.476349 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.476356 | orchestrator | 2026-03-09 00:57:34.476362 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-09 00:57:34.476370 | orchestrator | Monday 09 March 2026 00:54:34 +0000 (0:00:01.208) 0:08:44.317 ********** 2026-03-09 00:57:34.476375 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.476379 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476383 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.476387 | orchestrator | 2026-03-09 00:57:34.476391 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-09 00:57:34.476396 | orchestrator | Monday 09 March 2026 00:54:35 +0000 (0:00:00.394) 0:08:44.711 ********** 2026-03-09 00:57:34.476400 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.476404 | orchestrator | 2026-03-09 00:57:34.476408 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-09 00:57:34.476412 | orchestrator | Monday 09 March 2026 00:54:35 +0000 (0:00:00.619) 0:08:45.331 ********** 2026-03-09 00:57:34.476417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:57:34.476421 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-09 00:57:34.476425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:57:34.476429 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476434 | orchestrator | 2026-03-09 00:57:34.476438 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-09 00:57:34.476442 | orchestrator | Monday 09 March 2026 00:54:36 +0000 (0:00:00.991) 0:08:46.322 ********** 2026-03-09 00:57:34.476446 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476450 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.476454 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.476459 | orchestrator | 2026-03-09 00:57:34.476463 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-09 00:57:34.476467 | orchestrator | Monday 09 March 2026 00:54:37 +0000 (0:00:00.398) 0:08:46.721 ********** 2026-03-09 00:57:34.476471 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476475 | orchestrator | 2026-03-09 00:57:34.476479 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-09 00:57:34.476484 | orchestrator | Monday 09 March 2026 00:54:37 +0000 (0:00:00.254) 0:08:46.975 ********** 2026-03-09 00:57:34.476488 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476495 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.476499 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.476503 | orchestrator | 2026-03-09 00:57:34.476508 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-09 00:57:34.476512 | orchestrator | Monday 09 March 2026 00:54:37 +0000 (0:00:00.321) 0:08:47.296 ********** 2026-03-09 00:57:34.476516 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476520 | orchestrator | 2026-03-09 00:57:34.476524 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-09 00:57:34.476528 | orchestrator | Monday 09 March 2026 00:54:37 +0000 (0:00:00.237) 0:08:47.534 ********** 2026-03-09 00:57:34.476533 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476537 | orchestrator | 2026-03-09 00:57:34.476541 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-09 00:57:34.476545 | orchestrator | Monday 09 March 2026 00:54:38 +0000 (0:00:00.262) 0:08:47.796 ********** 2026-03-09 00:57:34.476549 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476553 | orchestrator | 2026-03-09 00:57:34.476558 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-09 00:57:34.476566 | orchestrator | Monday 09 March 2026 00:54:38 +0000 (0:00:00.126) 0:08:47.923 ********** 2026-03-09 00:57:34.476570 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476574 | orchestrator | 2026-03-09 00:57:34.476582 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-09 00:57:34.476587 | orchestrator | Monday 09 March 2026 00:54:38 +0000 (0:00:00.253) 0:08:48.176 ********** 2026-03-09 00:57:34.476591 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476595 | orchestrator | 2026-03-09 00:57:34.476599 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-09 00:57:34.476603 | orchestrator | Monday 09 March 2026 00:54:39 +0000 (0:00:00.802) 0:08:48.978 ********** 2026-03-09 00:57:34.476608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:57:34.476612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:57:34.476616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:57:34.476620 | orchestrator | skipping: [testbed-node-3] 2026-03-09 
00:57:34.476624 | orchestrator | 2026-03-09 00:57:34.476628 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-09 00:57:34.476632 | orchestrator | Monday 09 March 2026 00:54:39 +0000 (0:00:00.485) 0:08:49.464 ********** 2026-03-09 00:57:34.476636 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476641 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.476645 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.476649 | orchestrator | 2026-03-09 00:57:34.476653 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-09 00:57:34.476657 | orchestrator | Monday 09 March 2026 00:54:40 +0000 (0:00:00.368) 0:08:49.832 ********** 2026-03-09 00:57:34.476661 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476665 | orchestrator | 2026-03-09 00:57:34.476669 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-09 00:57:34.476674 | orchestrator | Monday 09 March 2026 00:54:40 +0000 (0:00:00.281) 0:08:50.114 ********** 2026-03-09 00:57:34.476678 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476682 | orchestrator | 2026-03-09 00:57:34.476686 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-09 00:57:34.476690 | orchestrator | 2026-03-09 00:57:34.476694 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-09 00:57:34.476698 | orchestrator | Monday 09 March 2026 00:54:41 +0000 (0:00:00.709) 0:08:50.824 ********** 2026-03-09 00:57:34.476703 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:34.476708 | orchestrator | 2026-03-09 00:57:34.476713 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-03-09 00:57:34.476717 | orchestrator | Monday 09 March 2026 00:54:42 +0000 (0:00:01.386) 0:08:52.210 ********** 2026-03-09 00:57:34.476721 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:57:34.476726 | orchestrator | 2026-03-09 00:57:34.476730 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-09 00:57:34.476734 | orchestrator | Monday 09 March 2026 00:54:43 +0000 (0:00:01.311) 0:08:53.522 ********** 2026-03-09 00:57:34.476738 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476742 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.476746 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.476750 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.476754 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.476758 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.476763 | orchestrator | 2026-03-09 00:57:34.476767 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-09 00:57:34.476771 | orchestrator | Monday 09 March 2026 00:54:45 +0000 (0:00:01.287) 0:08:54.809 ********** 2026-03-09 00:57:34.476778 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.476783 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.476787 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.476791 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.476795 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.476799 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.476804 | orchestrator | 2026-03-09 00:57:34.476811 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-09 00:57:34.476818 | orchestrator | Monday 09 
March 2026 00:54:45 +0000 (0:00:00.753) 0:08:55.562 ********** 2026-03-09 00:57:34.476825 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.476832 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.476838 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.476844 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.476851 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.476871 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.476878 | orchestrator | 2026-03-09 00:57:34.476884 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-09 00:57:34.476895 | orchestrator | Monday 09 March 2026 00:54:46 +0000 (0:00:01.063) 0:08:56.626 ********** 2026-03-09 00:57:34.476899 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.476905 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.476912 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.476918 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.476925 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.476931 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.476937 | orchestrator | 2026-03-09 00:57:34.476944 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-09 00:57:34.476951 | orchestrator | Monday 09 March 2026 00:54:47 +0000 (0:00:00.765) 0:08:57.391 ********** 2026-03-09 00:57:34.476958 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.476965 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.476972 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.476978 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.476985 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.476990 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.476994 | orchestrator | 2026-03-09 00:57:34.476998 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-03-09 00:57:34.477002 | orchestrator | Monday 09 March 2026 00:54:49 +0000 (0:00:01.338) 0:08:58.730 ********** 2026-03-09 00:57:34.477006 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.477010 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.477018 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.477022 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.477027 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.477031 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.477035 | orchestrator | 2026-03-09 00:57:34.477039 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-09 00:57:34.477043 | orchestrator | Monday 09 March 2026 00:54:49 +0000 (0:00:00.725) 0:08:59.456 ********** 2026-03-09 00:57:34.477047 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.477052 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.477056 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.477060 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:57:34.477064 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:57:34.477068 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:57:34.477072 | orchestrator | 2026-03-09 00:57:34.477076 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-09 00:57:34.477080 | orchestrator | Monday 09 March 2026 00:54:50 +0000 (0:00:00.931) 0:09:00.387 ********** 2026-03-09 00:57:34.477084 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.477089 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.477093 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.477101 | orchestrator | ok: [testbed-node-0] 2026-03-09 00:57:34.477105 | orchestrator | ok: [testbed-node-1] 2026-03-09 00:57:34.477109 | orchestrator | ok: [testbed-node-2] 2026-03-09 00:57:34.477113 | 
2026-03-09 00:57:34.477118 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 09 March 2026 00:54:51 +0000 (0:00:01.074) 0:09:01.461 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 09 March 2026 00:54:53 +0000 (0:00:01.406) 0:09:02.867 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 09 March 2026 00:54:53 +0000 (0:00:00.591) 0:09:03.459 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 09 March 2026 00:54:54 +0000 (0:00:00.939) 0:09:04.399 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 09 March 2026 00:54:55 +0000 (0:00:00.672) 0:09:05.071 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 09 March 2026 00:54:56 +0000 (0:00:00.900) 0:09:05.971 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 09 March 2026 00:54:57 +0000 (0:00:00.677) 0:09:06.648 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 09 March 2026 00:54:57 +0000 (0:00:00.932) 0:09:07.581 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 09 March 2026 00:54:58 +0000 (0:00:00.612) 0:09:08.193 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 09 March 2026 00:54:59 +0000 (0:00:00.906) 0:09:09.100 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 09 March 2026 00:55:00 +0000 (0:00:00.625) 0:09:09.726 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-crash : Create client.crash keyring] ********************************
Monday 09 March 2026 00:55:01 +0000 (0:00:01.443) 0:09:11.169 **********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Get keys from monitors] *************************************
Monday 09 March 2026 00:55:06 +0000 (0:00:04.888) 0:09:16.057 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Monday 09 March 2026 00:55:08 +0000 (0:00:02.123) 0:09:18.181 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Monday 09 March 2026 00:55:10 +0000 (0:00:01.935) 0:09:20.117 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Monday 09 March 2026 00:55:11 +0000 (0:00:00.980) 0:09:21.097 **********
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Monday 09 March 2026 00:55:12 +0000 (0:00:01.320) 0:09:22.418 **********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Monday 09 March 2026 00:55:14 +0000 (0:00:01.888) 0:09:24.306 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Monday 09 March 2026 00:55:18 +0000 (0:00:03.608) 0:09:27.914 **********
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Monday 09 March 2026 00:55:19 +0000 (0:00:01.596) 0:09:29.511 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Monday 09 March 2026 00:55:20 +0000 (0:00:00.931) 0:09:30.443 **********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Monday 09 March 2026 00:55:23 +0000 (0:00:02.556) 0:09:32.999 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 09 March 2026 00:55:24 +0000 (0:00:01.243) 0:09:34.242 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 09 March 2026 00:55:25 +0000 (0:00:00.565) 0:09:34.808 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 09 March 2026 00:55:26 +0000 (0:00:00.887) 0:09:35.695 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 09 March 2026 00:55:26 +0000 (0:00:00.354) 0:09:36.050 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 09 March 2026 00:55:27 +0000 (0:00:00.694) 0:09:36.744 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 09 March 2026 00:55:28 +0000 (0:00:01.092) 0:09:37.837 **********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 09 March 2026 00:55:28 +0000 (0:00:00.739) 0:09:38.577 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 09 March 2026 00:55:29 +0000 (0:00:00.307) 0:09:38.884 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 09 March 2026 00:55:29 +0000 (0:00:00.324) 0:09:39.209 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 09 March 2026 00:55:30 +0000 (0:00:00.622) 0:09:39.831 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 09 March 2026 00:55:31 +0000 (0:00:00.847) 0:09:40.679 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 09 March 2026 00:55:31 +0000 (0:00:00.872) 0:09:41.551 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 09 March 2026 00:55:32 +0000 (0:00:00.340) 0:09:41.892 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 09 March 2026 00:55:32 +0000 (0:00:00.619) 0:09:42.511 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 09 March 2026 00:55:33 +0000 (0:00:00.376) 0:09:42.888 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 09 March 2026 00:55:33 +0000 (0:00:00.373) 0:09:43.261 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 09 March 2026 00:55:33 +0000 (0:00:00.340) 0:09:43.602 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 09 March 2026 00:55:34 +0000 (0:00:00.503) 0:09:44.105 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 09 March 2026 00:55:34 +0000 (0:00:00.270) 0:09:44.375 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 09 March 2026 00:55:35 +0000 (0:00:00.274) 0:09:44.650 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 09 March 2026 00:55:35 +0000 (0:00:00.358) 0:09:45.008 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Monday 09 March 2026 00:55:36 +0000 (0:00:00.772) 0:09:45.781 **********
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Monday 09 March 2026 00:55:36 +0000 (0:00:00.408) 0:09:46.189 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Monday 09 March 2026 00:55:38 +0000 (0:00:02.191) 0:09:48.380 **********
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Monday 09 March 2026 00:55:38 +0000 (0:00:00.201) 0:09:48.581 **********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Monday 09 March 2026 00:55:47 +0000 (0:00:09.016) 0:09:57.598 **********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Monday 09 March 2026 00:55:51 +0000 (0:00:03.836) 0:10:01.435 **********
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Monday 09 March 2026 00:55:52 +0000 (0:00:00.591) 0:10:02.027 **********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Monday 09 March 2026 00:55:53 +0000 (0:00:01.075) 0:10:03.103 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Monday 09 March 2026 00:55:55 +0000 (0:00:02.417) 0:10:05.520 **********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Monday 09 March 2026 00:55:57 +0000 (0:00:01.621) 0:10:07.141 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Monday 09 March 2026 00:56:00 +0000 (0:00:02.755) 0:10:09.897 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Monday 09 March 2026 00:56:00 +0000 (0:00:00.502) 0:10:10.399 **********
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Monday 09 March 2026 00:56:02 +0000 (0:00:01.319) 0:10:11.718 **********
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Monday 09 March 2026 00:56:02 +0000 (0:00:00.631) 0:10:12.350 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Monday 09 March 2026 00:56:04 +0000 (0:00:01.434) 0:10:13.785 **********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Monday 09 March 2026 00:56:05 +0000 (0:00:01.783) 0:10:15.568 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Systemd start mds container] **********************************
Monday 09 March 2026 00:56:07 +0000 (0:00:01.947) 0:10:17.515 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Monday 09 March 2026 00:56:10 +0000 (0:00:02.289) 0:10:19.805 **********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 09 March 2026 00:56:12 +0000 (0:00:02.248) 0:10:22.054 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Monday 09 March 2026 00:56:13 +0000 (0:00:00.669) 0:10:22.723 **********
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Monday 09 March 2026 00:56:14 +0000 (0:00:01.035) 0:10:23.759 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Monday 09 March 2026 00:56:14 +0000 (0:00:00.408) 0:10:24.167 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Monday 09 March 2026 00:56:15 +0000 (0:00:01.256) 0:10:25.423 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Monday 09 March 2026 00:56:16 +0000 (0:00:00.982) 0:10:26.405 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 09 March 2026 00:56:17 +0000 (0:00:01.157) 0:10:27.563 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 09 March 2026 00:56:18 +0000 (0:00:00.556) 0:10:28.119 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 09 March 2026 00:56:19 +0000 (0:00:00.857) 0:10:28.977 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 09 March 2026 00:56:19 +0000 (0:00:00.343) 0:10:29.320 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 09 March 2026 00:56:20 +0000 (0:00:00.712) 0:10:30.033 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 09 March 2026 00:56:21 +0000 (0:00:00.759) 0:10:30.792 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 09 March 2026 00:56:22 +0000 (0:00:01.078) 0:10:31.871 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 09 March 2026 00:56:22 +0000 (0:00:00.338) 0:10:32.209 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping:
[testbed-node-5] 2026-03-09 00:57:34.479527 | orchestrator | 2026-03-09 00:57:34.479531 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-09 00:57:34.479534 | orchestrator | Monday 09 March 2026 00:56:22 +0000 (0:00:00.342) 0:10:32.551 ********** 2026-03-09 00:57:34.479538 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.479542 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.479546 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.479550 | orchestrator | 2026-03-09 00:57:34.479553 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-09 00:57:34.479557 | orchestrator | Monday 09 March 2026 00:56:23 +0000 (0:00:00.328) 0:10:32.880 ********** 2026-03-09 00:57:34.479561 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.479565 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.479568 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.479572 | orchestrator | 2026-03-09 00:57:34.479576 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-09 00:57:34.479580 | orchestrator | Monday 09 March 2026 00:56:24 +0000 (0:00:01.108) 0:10:33.988 ********** 2026-03-09 00:57:34.479583 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.479587 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.479591 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.479594 | orchestrator | 2026-03-09 00:57:34.479598 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-09 00:57:34.479602 | orchestrator | Monday 09 March 2026 00:56:25 +0000 (0:00:00.738) 0:10:34.726 ********** 2026-03-09 00:57:34.479606 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.479610 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.479613 | orchestrator | skipping: [testbed-node-5] 2026-03-09 
00:57:34.479617 | orchestrator | 2026-03-09 00:57:34.479621 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-09 00:57:34.479625 | orchestrator | Monday 09 March 2026 00:56:25 +0000 (0:00:00.315) 0:10:35.042 ********** 2026-03-09 00:57:34.479629 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.479632 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.479636 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.479640 | orchestrator | 2026-03-09 00:57:34.479644 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-09 00:57:34.479647 | orchestrator | Monday 09 March 2026 00:56:25 +0000 (0:00:00.346) 0:10:35.388 ********** 2026-03-09 00:57:34.479651 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.479655 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.479659 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.479662 | orchestrator | 2026-03-09 00:57:34.479669 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-09 00:57:34.479673 | orchestrator | Monday 09 March 2026 00:56:26 +0000 (0:00:00.562) 0:10:35.950 ********** 2026-03-09 00:57:34.479676 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.479681 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.479687 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.479693 | orchestrator | 2026-03-09 00:57:34.479699 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-09 00:57:34.479705 | orchestrator | Monday 09 March 2026 00:56:26 +0000 (0:00:00.312) 0:10:36.263 ********** 2026-03-09 00:57:34.479711 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.479717 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.479723 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.479729 | orchestrator | 2026-03-09 
00:57:34.479735 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-09 00:57:34.479739 | orchestrator | Monday 09 March 2026 00:56:26 +0000 (0:00:00.288) 0:10:36.551 ********** 2026-03-09 00:57:34.479743 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.479746 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.479750 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.479754 | orchestrator | 2026-03-09 00:57:34.479758 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-09 00:57:34.479761 | orchestrator | Monday 09 March 2026 00:56:27 +0000 (0:00:00.286) 0:10:36.838 ********** 2026-03-09 00:57:34.479765 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.479769 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.479773 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.479776 | orchestrator | 2026-03-09 00:57:34.479780 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-09 00:57:34.479784 | orchestrator | Monday 09 March 2026 00:56:27 +0000 (0:00:00.503) 0:10:37.341 ********** 2026-03-09 00:57:34.479788 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.479792 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.479798 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.479802 | orchestrator | 2026-03-09 00:57:34.479805 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-09 00:57:34.479809 | orchestrator | Monday 09 March 2026 00:56:27 +0000 (0:00:00.287) 0:10:37.629 ********** 2026-03-09 00:57:34.479813 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.479817 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.479820 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.479824 | orchestrator | 2026-03-09 00:57:34.479828 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-09 00:57:34.479832 | orchestrator | Monday 09 March 2026 00:56:28 +0000 (0:00:00.376) 0:10:38.005 ********** 2026-03-09 00:57:34.479835 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.479839 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.479843 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.479846 | orchestrator | 2026-03-09 00:57:34.479850 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-09 00:57:34.479854 | orchestrator | Monday 09 March 2026 00:56:29 +0000 (0:00:00.689) 0:10:38.695 ********** 2026-03-09 00:57:34.479871 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.479880 | orchestrator | 2026-03-09 00:57:34.479884 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-09 00:57:34.479891 | orchestrator | Monday 09 March 2026 00:56:29 +0000 (0:00:00.620) 0:10:39.316 ********** 2026-03-09 00:57:34.479895 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:57:34.479898 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-09 00:57:34.479902 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 00:57:34.479906 | orchestrator | 2026-03-09 00:57:34.479910 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-09 00:57:34.479967 | orchestrator | Monday 09 March 2026 00:56:32 +0000 (0:00:02.380) 0:10:41.696 ********** 2026-03-09 00:57:34.479971 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 00:57:34.479975 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-09 00:57:34.479979 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:57:34.479982 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-09 00:57:34.479986 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 00:57:34.479990 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-09 00:57:34.479994 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:57:34.479998 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-09 00:57:34.480001 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:57:34.480005 | orchestrator | 2026-03-09 00:57:34.480009 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-09 00:57:34.480013 | orchestrator | Monday 09 March 2026 00:56:33 +0000 (0:00:01.393) 0:10:43.090 ********** 2026-03-09 00:57:34.480017 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.480020 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.480024 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.480028 | orchestrator | 2026-03-09 00:57:34.480031 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-09 00:57:34.480035 | orchestrator | Monday 09 March 2026 00:56:33 +0000 (0:00:00.296) 0:10:43.387 ********** 2026-03-09 00:57:34.480039 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.480043 | orchestrator | 2026-03-09 00:57:34.480047 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-09 00:57:34.480050 | orchestrator | Monday 09 March 2026 00:56:34 +0000 (0:00:00.506) 0:10:43.894 ********** 2026-03-09 00:57:34.480056 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.480063 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.480069 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.480075 | orchestrator | 2026-03-09 00:57:34.480081 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-09 00:57:34.480086 | orchestrator | Monday 09 March 2026 00:56:35 +0000 (0:00:01.284) 0:10:45.178 ********** 2026-03-09 00:57:34.480092 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:57:34.480098 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-09 00:57:34.480104 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:57:34.480110 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-09 00:57:34.480115 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:57:34.480121 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-09 00:57:34.480126 | orchestrator | 2026-03-09 00:57:34.480132 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-09 00:57:34.480138 | orchestrator | Monday 09 March 2026 00:56:40 +0000 (0:00:04.841) 0:10:50.020 ********** 2026-03-09 00:57:34.480144 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:57:34.480155 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 00:57:34.480169 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:57:34.480176 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 00:57:34.480181 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:57:34.480187 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 00:57:34.480193 | orchestrator | 2026-03-09 00:57:34.480199 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-09 00:57:34.480206 | orchestrator | Monday 09 March 2026 00:56:43 +0000 (0:00:02.645) 0:10:52.665 ********** 2026-03-09 00:57:34.480212 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 00:57:34.480219 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:57:34.480223 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-09 00:57:34.480227 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 00:57:34.480234 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:57:34.480240 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:57:34.480246 | orchestrator | 2026-03-09 00:57:34.480252 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-09 00:57:34.480263 | orchestrator | Monday 09 March 2026 00:56:44 +0000 (0:00:01.279) 0:10:53.944 ********** 2026-03-09 00:57:34.480270 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-09 00:57:34.480276 | orchestrator | 2026-03-09 00:57:34.480283 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-09 00:57:34.480289 | orchestrator | Monday 09 March 2026 00:56:44 +0000 (0:00:00.249) 0:10:54.194 ********** 2026-03-09 00:57:34.480295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-09 00:57:34.480301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:57:34.480307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:57:34.480313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:57:34.480320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:57:34.480326 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.480332 | orchestrator | 2026-03-09 00:57:34.480338 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-09 00:57:34.480344 | orchestrator | Monday 09 March 2026 00:56:45 +0000 (0:00:01.160) 0:10:55.355 ********** 2026-03-09 00:57:34.480350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:57:34.480357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:57:34.480364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:57:34.480370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:57:34.480376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-09 00:57:34.480382 | orchestrator | skipping: [testbed-node-3] 2026-03-09 
00:57:34.480389 | orchestrator | 2026-03-09 00:57:34.480395 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-09 00:57:34.480401 | orchestrator | Monday 09 March 2026 00:56:46 +0000 (0:00:00.629) 0:10:55.984 ********** 2026-03-09 00:57:34.480407 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 00:57:34.480418 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 00:57:34.480424 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 00:57:34.480430 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 00:57:34.480436 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-09 00:57:34.480442 | orchestrator | 2026-03-09 00:57:34.480448 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-09 00:57:34.480453 | orchestrator | Monday 09 March 2026 00:57:17 +0000 (0:00:31.552) 0:11:27.536 ********** 2026-03-09 00:57:34.480459 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.480465 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.480474 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.480481 | orchestrator | 2026-03-09 00:57:34.480487 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-09 00:57:34.480494 | orchestrator | 
Monday 09 March 2026 00:57:18 +0000 (0:00:00.374) 0:11:27.911 ********** 2026-03-09 00:57:34.480499 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.480505 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.480511 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.480517 | orchestrator | 2026-03-09 00:57:34.480523 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-09 00:57:34.480529 | orchestrator | Monday 09 March 2026 00:57:18 +0000 (0:00:00.348) 0:11:28.259 ********** 2026-03-09 00:57:34.480537 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.480541 | orchestrator | 2026-03-09 00:57:34.480545 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-09 00:57:34.480549 | orchestrator | Monday 09 March 2026 00:57:19 +0000 (0:00:00.841) 0:11:29.100 ********** 2026-03-09 00:57:34.480552 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.480556 | orchestrator | 2026-03-09 00:57:34.480564 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-09 00:57:34.480568 | orchestrator | Monday 09 March 2026 00:57:20 +0000 (0:00:00.581) 0:11:29.682 ********** 2026-03-09 00:57:34.480572 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:57:34.480576 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:57:34.480579 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:57:34.480583 | orchestrator | 2026-03-09 00:57:34.480587 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-09 00:57:34.480591 | orchestrator | Monday 09 March 2026 00:57:21 +0000 (0:00:01.349) 0:11:31.032 ********** 2026-03-09 00:57:34.480595 | orchestrator | changed: 
[testbed-node-3] 2026-03-09 00:57:34.480598 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:57:34.480602 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:57:34.480606 | orchestrator | 2026-03-09 00:57:34.480610 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-09 00:57:34.480613 | orchestrator | Monday 09 March 2026 00:57:22 +0000 (0:00:01.569) 0:11:32.601 ********** 2026-03-09 00:57:34.480617 | orchestrator | changed: [testbed-node-4] 2026-03-09 00:57:34.480621 | orchestrator | changed: [testbed-node-3] 2026-03-09 00:57:34.480625 | orchestrator | changed: [testbed-node-5] 2026-03-09 00:57:34.480628 | orchestrator | 2026-03-09 00:57:34.480632 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-09 00:57:34.480641 | orchestrator | Monday 09 March 2026 00:57:24 +0000 (0:00:01.979) 0:11:34.581 ********** 2026-03-09 00:57:34.480645 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.480648 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.480652 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-09 00:57:34.480656 | orchestrator | 2026-03-09 00:57:34.480660 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-09 00:57:34.480664 | orchestrator | Monday 09 March 2026 00:57:27 +0000 (0:00:02.961) 0:11:37.543 ********** 2026-03-09 00:57:34.480667 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.480671 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.480675 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.480679 | orchestrator 
| 2026-03-09 00:57:34.480683 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-09 00:57:34.480686 | orchestrator | Monday 09 March 2026 00:57:28 +0000 (0:00:00.449) 0:11:37.992 ********** 2026-03-09 00:57:34.480690 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 00:57:34.480694 | orchestrator | 2026-03-09 00:57:34.480698 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-09 00:57:34.480702 | orchestrator | Monday 09 March 2026 00:57:28 +0000 (0:00:00.551) 0:11:38.543 ********** 2026-03-09 00:57:34.480705 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.480709 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.480713 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.480717 | orchestrator | 2026-03-09 00:57:34.480721 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-09 00:57:34.480724 | orchestrator | Monday 09 March 2026 00:57:29 +0000 (0:00:00.721) 0:11:39.265 ********** 2026-03-09 00:57:34.480728 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:57:34.480732 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:57:34.480736 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:57:34.480740 | orchestrator | 2026-03-09 00:57:34.480743 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-09 00:57:34.480747 | orchestrator | Monday 09 March 2026 00:57:30 +0000 (0:00:00.508) 0:11:39.773 ********** 2026-03-09 00:57:34.480751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-09 00:57:34.480755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-09 00:57:34.480759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-09 00:57:34.480763 | orchestrator 
| skipping: [testbed-node-3] 2026-03-09 00:57:34.480766 | orchestrator | 2026-03-09 00:57:34.480770 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-09 00:57:34.480774 | orchestrator | Monday 09 March 2026 00:57:30 +0000 (0:00:00.713) 0:11:40.486 ********** 2026-03-09 00:57:34.480778 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:57:34.480781 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:57:34.480785 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:57:34.480789 | orchestrator | 2026-03-09 00:57:34.480793 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:57:34.480800 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-09 00:57:34.480805 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-09 00:57:34.480809 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-09 00:57:34.480816 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-09 00:57:34.480820 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-09 00:57:34.480826 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-09 00:57:34.480830 | orchestrator | 2026-03-09 00:57:34.480834 | orchestrator | 2026-03-09 00:57:34.480838 | orchestrator | 2026-03-09 00:57:34.480845 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:57:34.480849 | orchestrator | Monday 09 March 2026 00:57:31 +0000 (0:00:00.317) 0:11:40.804 ********** 2026-03-09 00:57:34.480852 | orchestrator | =============================================================================== 
2026-03-09 00:57:34.480856 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 47.57s
2026-03-09 00:57:34.480898 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.21s
2026-03-09 00:57:34.480902 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.55s
2026-03-09 00:57:34.480906 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.56s
2026-03-09 00:57:34.480910 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.32s
2026-03-09 00:57:34.480914 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.63s
2026-03-09 00:57:34.480918 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.72s
2026-03-09 00:57:34.480922 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.98s
2026-03-09 00:57:34.480925 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 11.08s
2026-03-09 00:57:34.480929 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.02s
2026-03-09 00:57:34.480933 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.91s
2026-03-09 00:57:34.480937 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.48s
2026-03-09 00:57:34.480941 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.33s
2026-03-09 00:57:34.480945 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.89s
2026-03-09 00:57:34.480948 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.84s
2026-03-09 00:57:34.480952 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.56s
2026-03-09 00:57:34.480956 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.84s
2026-03-09 00:57:34.480961 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 3.71s
2026-03-09 00:57:34.480967 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.68s
2026-03-09 00:57:34.480973 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.63s
2026-03-09 00:57:34.480979 | orchestrator | 2026-03-09 00:57:34 | INFO  | Task 12928b58-36d7-4f8d-bc4a-2ea73d023c26 is in state STARTED
2026-03-09 00:57:34.480985 | orchestrator | 2026-03-09 00:57:34 | INFO  | Task 03ac4093-6d77-471d-9138-530043ac4275 is in state STARTED
2026-03-09 00:57:34.480992 | orchestrator | 2026-03-09 00:57:34 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:57:37.508169 | orchestrator | 2026-03-09 00:57:37 | INFO  | Task f645106b-c781-49cf-bd32-f7a407860a63 is in state STARTED
2026-03-09 00:57:37.511058 | orchestrator | 2026-03-09 00:57:37 | INFO  | Task 12928b58-36d7-4f8d-bc4a-2ea73d023c26 is in state STARTED
2026-03-09 00:57:37.511165 | orchestrator | 2026-03-09 00:57:37 | INFO  | Task 03ac4093-6d77-471d-9138-530043ac4275 is in state STARTED
2026-03-09 00:57:37.511273 | orchestrator | 2026-03-09 00:57:37 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:58:41.644123 | orchestrator |
2026-03-09 00:58:41.644240 | orchestrator | 2026-03-09 00:58:41 | INFO  | Task f645106b-c781-49cf-bd32-f7a407860a63 is in state
SUCCESS
2026-03-09 00:58:41.645606 | orchestrator |
2026-03-09 00:58:41.645643 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 00:58:41.645655 | orchestrator |
2026-03-09 00:58:41.645815 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 00:58:41.645828 | orchestrator | Monday 09 March 2026 00:55:38 +0000 (0:00:00.267) 0:00:00.267 **********
2026-03-09 00:58:41.645838 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:41.645849 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:58:41.645859 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:58:41.645890 | orchestrator |
2026-03-09 00:58:41.645901 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 00:58:41.645911 | orchestrator | Monday 09 March 2026 00:55:38 +0000 (0:00:00.306) 0:00:00.574 **********
2026-03-09 00:58:41.645921 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-09 00:58:41.645930 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-09 00:58:41.645940 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-09 00:58:41.645949 | orchestrator |
2026-03-09 00:58:41.645959 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-09 00:58:41.645969 | orchestrator |
2026-03-09 00:58:41.645978 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-09 00:58:41.645988 | orchestrator | Monday 09 March 2026 00:55:38 +0000 (0:00:00.416) 0:00:00.991 **********
2026-03-09 00:58:41.645998 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:58:41.646007 | orchestrator |
2026-03-09 00:58:41.646061 | orchestrator | TASK [opensearch : Setting sysctl values]
************************************** 2026-03-09 00:58:41.646072 | orchestrator | Monday 09 March 2026 00:55:39 +0000 (0:00:00.506) 0:00:01.498 ********** 2026-03-09 00:58:41.646081 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-09 00:58:41.646091 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-09 00:58:41.646100 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-09 00:58:41.646110 | orchestrator | 2026-03-09 00:58:41.646119 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-09 00:58:41.646129 | orchestrator | Monday 09 March 2026 00:55:39 +0000 (0:00:00.671) 0:00:02.169 ********** 2026-03-09 00:58:41.646142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:58:41.646156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:58:41.646189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:58:41.646216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:58:41.646238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:58:41.646258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:58:41.646276 | orchestrator | 2026-03-09 00:58:41.646293 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-09 00:58:41.646309 | orchestrator | Monday 09 March 2026 00:55:41 +0000 (0:00:01.634) 0:00:03.804 ********** 2026-03-09 00:58:41.646327 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:58:41.646344 | orchestrator | 2026-03-09 00:58:41.646379 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-09 00:58:41.646396 | orchestrator | Monday 09 March 2026 00:55:42 +0000 (0:00:00.506) 0:00:04.311 ********** 2026-03-09 00:58:41.646427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:58:41.646449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:58:41.646471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:58:41.646492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:58:41.646527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:58:41.646561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:58:41.646582 | orchestrator | 2026-03-09 00:58:41.646601 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-09 00:58:41.646618 | orchestrator | Monday 09 March 2026 00:55:44 +0000 (0:00:02.546) 0:00:06.857 ********** 2026-03-09 00:58:41.646640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 00:58:41.646659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 00:58:41.646686 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:58:41.646715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 00:58:41.646746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 00:58:41.646766 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:58:41.646833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 00:58:41.646856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 00:58:41.646886 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:58:41.646905 | orchestrator | 2026-03-09 00:58:41.646922 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-09 00:58:41.646938 | orchestrator | Monday 09 
March 2026 00:55:46 +0000 (0:00:01.464) 0:00:08.322 ********** 2026-03-09 00:58:41.647111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 00:58:41.647154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}}}})  2026-03-09 00:58:41.647174 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:58:41.647193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 00:58:41.647211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 00:58:41.647241 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:58:41.647263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-09 00:58:41.647294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-09 00:58:41.647433 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:58:41.647454 | orchestrator | 2026-03-09 00:58:41.647471 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-09 00:58:41.647487 | orchestrator | Monday 09 March 2026 00:55:47 +0000 (0:00:01.166) 0:00:09.488 ********** 2026-03-09 00:58:41.647598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:58:41.647634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:58:41.647665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:58:41.647705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:58:41.647726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:58:41.647746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:58:41.647773 | orchestrator | 2026-03-09 00:58:41.647814 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-09 00:58:41.647833 | orchestrator | Monday 09 March 2026 00:55:49 +0000 (0:00:02.642) 0:00:12.130 ********** 2026-03-09 00:58:41.647929 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:58:41.647956 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:58:41.647974 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:58:41.647991 | orchestrator | 2026-03-09 00:58:41.648006 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-09 00:58:41.648023 | orchestrator | Monday 09 March 2026 00:55:53 +0000 (0:00:03.298) 0:00:15.429 ********** 2026-03-09 00:58:41.648041 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:58:41.648059 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:58:41.648075 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:58:41.648092 | orchestrator | 2026-03-09 00:58:41.648109 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-09 00:58:41.648127 | orchestrator | Monday 09 March 2026 00:55:54 +0000 (0:00:01.660) 0:00:17.089 ********** 2026-03-09 00:58:41.648155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:58:41.648189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:58:41.648207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-09 00:58:41.648227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:58:41.648268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:58:41.648300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-09 00:58:41.648318 | orchestrator | 2026-03-09 00:58:41.648335 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-09 00:58:41.648347 | orchestrator | Monday 09 March 2026 00:55:56 +0000 (0:00:02.022) 0:00:19.111 ********** 2026-03-09 00:58:41.648357 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:58:41.648367 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:58:41.648377 | 
orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:41.648386 | orchestrator |
2026-03-09 00:58:41.648396 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-09 00:58:41.648405 | orchestrator | Monday 09 March 2026 00:55:57 +0000 (0:00:00.275) 0:00:19.387 **********
2026-03-09 00:58:41.648415 | orchestrator |
2026-03-09 00:58:41.648424 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-09 00:58:41.648434 | orchestrator | Monday 09 March 2026 00:55:57 +0000 (0:00:00.061) 0:00:19.448 **********
2026-03-09 00:58:41.648444 | orchestrator |
2026-03-09 00:58:41.648453 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-09 00:58:41.648470 | orchestrator | Monday 09 March 2026 00:55:57 +0000 (0:00:00.062) 0:00:19.511 **********
2026-03-09 00:58:41.648479 | orchestrator |
2026-03-09 00:58:41.648489 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-03-09 00:58:41.648498 | orchestrator | Monday 09 March 2026 00:55:57 +0000 (0:00:00.062) 0:00:19.573 **********
2026-03-09 00:58:41.648508 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:58:41.648518 | orchestrator |
2026-03-09 00:58:41.648534 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-03-09 00:58:41.648551 | orchestrator | Monday 09 March 2026 00:55:58 +0000 (0:00:00.707) 0:00:20.280 **********
2026-03-09 00:58:41.648567 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:58:41.648582 | orchestrator |
2026-03-09 00:58:41.648598 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-03-09 00:58:41.648615 | orchestrator | Monday 09 March 2026 00:55:58 +0000 (0:00:00.240) 0:00:20.521 **********
2026-03-09 00:58:41.648630 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:58:41.648647 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:58:41.648663 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:58:41.648679 | orchestrator |
2026-03-09 00:58:41.648695 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-03-09 00:58:41.648712 | orchestrator | Monday 09 March 2026 00:57:03 +0000 (0:01:05.479) 0:01:26.000 **********
2026-03-09 00:58:41.648727 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:58:41.648743 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:58:41.648759 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:58:41.648775 | orchestrator |
2026-03-09 00:58:41.648853 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-09 00:58:41.648872 | orchestrator | Monday 09 March 2026 00:58:27 +0000 (0:01:23.854) 0:02:49.855 **********
2026-03-09 00:58:41.648890 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:58:41.648906 | orchestrator |
2026-03-09 00:58:41.648922 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-03-09 00:58:41.648940 | orchestrator | Monday 09 March 2026 00:58:28 +0000 (0:00:00.728) 0:02:50.584 **********
2026-03-09 00:58:41.648957 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:41.648975 | orchestrator |
2026-03-09 00:58:41.648991 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] **************
2026-03-09 00:58:41.649007 | orchestrator | Monday 09 March 2026 00:58:31 +0000 (0:00:02.718) 0:02:53.302 **********
2026-03-09 00:58:41.649024 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:41.649041 | orchestrator |
2026-03-09 00:58:41.649059 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-03-09 00:58:41.649076 | orchestrator | Monday 09 March 2026 00:58:33 +0000
(0:00:02.550) 0:02:55.853 **********
2026-03-09 00:58:41.649094 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:41.649113 | orchestrator |
2026-03-09 00:58:41.649129 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-03-09 00:58:41.649148 | orchestrator | Monday 09 March 2026 00:58:35 +0000 (0:00:02.339) 0:02:58.193 **********
2026-03-09 00:58:41.649166 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:58:41.649183 | orchestrator |
2026-03-09 00:58:41.649200 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-03-09 00:58:41.649226 | orchestrator | Monday 09 March 2026 00:58:38 +0000 (0:00:02.602) 0:03:00.795 **********
2026-03-09 00:58:41.649243 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:58:41.649260 | orchestrator |
2026-03-09 00:58:41.649277 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:58:41.649294 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-09 00:58:41.649312 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-09 00:58:41.649343 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-09 00:58:41.649354 | orchestrator |
2026-03-09 00:58:41.649363 | orchestrator |
2026-03-09 00:58:41.649373 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:58:41.649382 | orchestrator | Monday 09 March 2026 00:58:40 +0000 (0:00:02.230) 0:03:03.026 **********
2026-03-09 00:58:41.649392 | orchestrator | ===============================================================================
2026-03-09 00:58:41.649401 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 83.85s
2026-03-09 00:58:41.649411 | orchestrator | opensearch : Restart opensearch container ------------------------------ 65.48s
2026-03-09 00:58:41.649420 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.30s
2026-03-09 00:58:41.649430 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.72s
2026-03-09 00:58:41.649440 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.64s
2026-03-09 00:58:41.649449 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.60s
2026-03-09 00:58:41.649459 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.55s
2026-03-09 00:58:41.649468 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.55s
2026-03-09 00:58:41.649477 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.34s
2026-03-09 00:58:41.649487 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.23s
2026-03-09 00:58:41.649496 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.02s
2026-03-09 00:58:41.649506 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.66s
2026-03-09 00:58:41.649516 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.63s
2026-03-09 00:58:41.649525 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.46s
2026-03-09 00:58:41.649535 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.17s
2026-03-09 00:58:41.649544 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.73s
2026-03-09 00:58:41.649554 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.71s
2026-03-09 00:58:41.649563 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.67s
2026-03-09 00:58:41.649573 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s
2026-03-09 00:58:41.649582 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s
2026-03-09 00:58:41.649591 | orchestrator | 2026-03-09 00:58:41 | INFO  | Task 12928b58-36d7-4f8d-bc4a-2ea73d023c26 is in state STARTED
2026-03-09 00:58:41.649601 | orchestrator | 2026-03-09 00:58:41 | INFO  | Task 03ac4093-6d77-471d-9138-530043ac4275 is in state STARTED
2026-03-09 00:58:41.649611 | orchestrator | 2026-03-09 00:58:41 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:58:44.689534 | orchestrator | 2026-03-09 00:58:44 | INFO  | Task 12928b58-36d7-4f8d-bc4a-2ea73d023c26 is in state SUCCESS
2026-03-09 00:58:44.690534 | orchestrator |
2026-03-09 00:58:44.690564 | orchestrator |
2026-03-09 00:58:44.690572 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-03-09 00:58:44.690580 | orchestrator |
2026-03-09 00:58:44.690586 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-09 00:58:44.690594 | orchestrator | Monday 09 March 2026 00:55:38 +0000 (0:00:00.090) 0:00:00.090 **********
2026-03-09 00:58:44.690600 | orchestrator | ok: [localhost] => {
2026-03-09 00:58:44.690608 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-03-09 00:58:44.690615 | orchestrator | }
2026-03-09 00:58:44.690622 | orchestrator |
2026-03-09 00:58:44.690646 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-03-09 00:58:44.690653 | orchestrator | Monday 09 March 2026 00:55:38 +0000 (0:00:00.046) 0:00:00.136 **********
2026-03-09 00:58:44.690660 | orchestrator | fatal: [localhost]: FAILED!
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-03-09 00:58:44.690668 | orchestrator | ...ignoring
2026-03-09 00:58:44.690674 | orchestrator |
2026-03-09 00:58:44.690681 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-03-09 00:58:44.690687 | orchestrator | Monday 09 March 2026 00:55:41 +0000 (0:00:02.941) 0:00:03.078 **********
2026-03-09 00:58:44.690694 | orchestrator | skipping: [localhost]
2026-03-09 00:58:44.690700 | orchestrator |
2026-03-09 00:58:44.690706 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-03-09 00:58:44.690713 | orchestrator | Monday 09 March 2026 00:55:41 +0000 (0:00:00.052) 0:00:03.131 **********
2026-03-09 00:58:44.690732 | orchestrator | ok: [localhost]
2026-03-09 00:58:44.690738 | orchestrator |
2026-03-09 00:58:44.690744 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 00:58:44.690751 | orchestrator |
2026-03-09 00:58:44.690757 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 00:58:44.690763 | orchestrator | Monday 09 March 2026 00:55:41 +0000 (0:00:00.288) 0:00:03.304 **********
2026-03-09 00:58:44.690769 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:44.690776 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:58:44.690873 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:58:44.690882 | orchestrator |
2026-03-09 00:58:44.690888 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 00:58:44.691152 | orchestrator | Monday 09 March 2026 00:55:41 +0000 (0:00:00.288) 0:00:03.593 **********
2026-03-09 00:58:44.691168 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-09 00:58:44.691175 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-09 00:58:44.691182 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-09 00:58:44.691188 | orchestrator |
2026-03-09 00:58:44.691194 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-09 00:58:44.691201 | orchestrator |
2026-03-09 00:58:44.691207 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-09 00:58:44.691213 | orchestrator | Monday 09 March 2026 00:55:42 +0000 (0:00:00.477) 0:00:04.071 **********
2026-03-09 00:58:44.691220 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-09 00:58:44.691226 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-09 00:58:44.691233 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-09 00:58:44.691239 | orchestrator |
2026-03-09 00:58:44.691245 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-09 00:58:44.691251 | orchestrator | Monday 09 March 2026 00:55:42 +0000 (0:00:00.367) 0:00:04.438 **********
2026-03-09 00:58:44.691258 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:58:44.691265 | orchestrator |
2026-03-09 00:58:44.691271 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-03-09 00:58:44.691278 | orchestrator | Monday 09 March 2026 00:55:43 +0000 (0:00:00.532) 0:00:04.970 **********
2026-03-09 00:58:44.691298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 00:58:44.691324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 00:58:44.691332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 00:58:44.691344 | orchestrator | 2026-03-09 00:58:44.691356 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-09 00:58:44.691363 | orchestrator | Monday 09 March 2026 00:55:46 +0000 (0:00:03.171) 0:00:08.142 ********** 2026-03-09 00:58:44.691370 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:58:44.691378 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:58:44.691385 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:58:44.691393 | orchestrator | 2026-03-09 00:58:44.691400 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-09 00:58:44.691407 | orchestrator | Monday 09 March 2026 00:55:47 +0000 (0:00:00.864) 0:00:09.007 ********** 2026-03-09 00:58:44.691415 | orchestrator | skipping: [testbed-node-1] 2026-03-09 
00:58:44.691422 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:58:44.691429 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:58:44.691436 | orchestrator | 2026-03-09 00:58:44.691444 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-09 00:58:44.691451 | orchestrator | Monday 09 March 2026 00:55:48 +0000 (0:00:01.735) 0:00:10.742 ********** 2026-03-09 00:58:44.691463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 00:58:44.691477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 00:58:44.691494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 
00:58:44.691503 | orchestrator | 2026-03-09 00:58:44.691511 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-09 00:58:44.691518 | orchestrator | Monday 09 March 2026 00:55:52 +0000 (0:00:03.791) 0:00:14.533 ********** 2026-03-09 00:58:44.691526 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:58:44.691533 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:58:44.691540 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:58:44.691547 | orchestrator | 2026-03-09 00:58:44.691555 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-09 00:58:44.691562 | orchestrator | Monday 09 March 2026 00:55:53 +0000 (0:00:01.182) 0:00:15.716 ********** 2026-03-09 00:58:44.691574 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:58:44.691581 | orchestrator | changed: [testbed-node-1] 2026-03-09 00:58:44.691589 | orchestrator | changed: [testbed-node-2] 2026-03-09 00:58:44.691596 | orchestrator | 2026-03-09 00:58:44.691603 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-09 00:58:44.691611 | orchestrator | Monday 09 March 2026 00:55:57 +0000 (0:00:03.757) 0:00:19.473 ********** 2026-03-09 00:58:44.691618 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 00:58:44.691625 | orchestrator | 2026-03-09 00:58:44.691633 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-09 00:58:44.691640 | orchestrator | Monday 09 March 2026 00:55:58 +0000 (0:00:00.647) 0:00:20.120 ********** 2026-03-09 00:58:44.691654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:58:44.691663 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:58:44.691674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:58:44.691687 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:58:44.691700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:58:44.691708 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:58:44.691716 | orchestrator | 2026-03-09 00:58:44.691723 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-09 00:58:44.691730 | orchestrator | Monday 09 March 2026 00:56:02 +0000 (0:00:04.628) 0:00:24.749 ********** 2026-03-09 00:58:44.691742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:58:44.691756 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:58:44.691770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:58:44.691800 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:58:44.691815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:58:44.691834 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:58:44.691843 | orchestrator | 2026-03-09 00:58:44.691851 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-09 00:58:44.691862 | orchestrator | Monday 09 March 2026 00:56:06 +0000 (0:00:03.769) 0:00:28.519 ********** 2026-03-09 00:58:44.691874 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:58:44.691894 | orchestrator | skipping: [testbed-node-0] 2026-03-09 00:58:44.691921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:58:44.691943 | orchestrator | skipping: [testbed-node-2] 2026-03-09 00:58:44.691955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-09 00:58:44.691966 | orchestrator | skipping: [testbed-node-1] 2026-03-09 00:58:44.691976 | orchestrator | 2026-03-09 00:58:44.691988 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-09 00:58:44.692000 | orchestrator | Monday 09 March 2026 00:56:09 +0000 
(0:00:03.206) 0:00:31.725 ********** 2026-03-09 00:58:44.692025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 00:58:44.692045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 00:58:44.692066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-09 00:58:44.692080 | orchestrator | 2026-03-09 00:58:44.692097 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-09 00:58:44.692109 | orchestrator | Monday 09 March 2026 00:56:13 +0000 (0:00:03.305) 0:00:35.030 ********** 2026-03-09 00:58:44.692130 | orchestrator | changed: [testbed-node-0] 2026-03-09 00:58:44.692137 | orchestrator | 
changed: [testbed-node-1]
2026-03-09 00:58:44.692145 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:58:44.692152 | orchestrator |
2026-03-09 00:58:44.692159 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-09 00:58:44.692166 | orchestrator | Monday 09 March 2026 00:56:13 +0000 (0:00:00.829) 0:00:35.860 **********
2026-03-09 00:58:44.692174 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:44.692181 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:58:44.692189 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:58:44.692196 | orchestrator |
2026-03-09 00:58:44.692203 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-09 00:58:44.692211 | orchestrator | Monday 09 March 2026 00:56:14 +0000 (0:00:00.490) 0:00:36.554 **********
2026-03-09 00:58:44.692218 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:44.692225 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:58:44.692232 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:58:44.692239 | orchestrator |
2026-03-09 00:58:44.692247 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-09 00:58:44.692254 | orchestrator | Monday 09 March 2026 00:56:15 +0000 (0:00:00.490) 0:00:37.044 **********
2026-03-09 00:58:44.692262 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-09 00:58:44.692270 | orchestrator | ...ignoring
2026-03-09 00:58:44.692277 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-09 00:58:44.692285 | orchestrator | ...ignoring
2026-03-09 00:58:44.692292 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-09 00:58:44.692299 | orchestrator | ...ignoring
2026-03-09 00:58:44.692306 | orchestrator |
2026-03-09 00:58:44.692314 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-09 00:58:44.692321 | orchestrator | Monday 09 March 2026 00:56:26 +0000 (0:00:11.119) 0:00:48.163 **********
2026-03-09 00:58:44.692328 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:44.692336 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:58:44.692343 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:58:44.692350 | orchestrator |
2026-03-09 00:58:44.692357 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-09 00:58:44.692364 | orchestrator | Monday 09 March 2026 00:56:26 +0000 (0:00:00.383) 0:00:48.547 **********
2026-03-09 00:58:44.692372 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:58:44.692379 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:58:44.692386 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:44.692393 | orchestrator |
2026-03-09 00:58:44.692401 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-09 00:58:44.692408 | orchestrator | Monday 09 March 2026 00:56:27 +0000 (0:00:00.546) 0:00:49.094 **********
2026-03-09 00:58:44.692415 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:58:44.692422 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:58:44.692430 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:44.692437 | orchestrator |
2026-03-09 00:58:44.692444 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-09 00:58:44.692451 | orchestrator | Monday 09 March 2026 00:56:27 +0000 (0:00:00.377) 0:00:49.499 **********
2026-03-09 00:58:44.692459 | orchestrator | skipping:
[testbed-node-0]
2026-03-09 00:58:44.692466 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:58:44.692473 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:44.692480 | orchestrator |
2026-03-09 00:58:44.692487 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-09 00:58:44.692495 | orchestrator | Monday 09 March 2026 00:56:27 +0000 (0:00:00.377) 0:00:49.876 **********
2026-03-09 00:58:44.692506 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:44.692514 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:58:44.692521 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:58:44.692528 | orchestrator |
2026-03-09 00:58:44.692535 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-09 00:58:44.692543 | orchestrator | Monday 09 March 2026 00:56:28 +0000 (0:00:00.399) 0:00:50.276 **********
2026-03-09 00:58:44.692554 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:58:44.692562 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:58:44.692569 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:44.692577 | orchestrator |
2026-03-09 00:58:44.692584 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-09 00:58:44.692591 | orchestrator | Monday 09 March 2026 00:56:28 +0000 (0:00:00.562) 0:00:50.838 **********
2026-03-09 00:58:44.692599 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:58:44.692606 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:44.692614 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-09 00:58:44.692621 | orchestrator |
2026-03-09 00:58:44.692629 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-09 00:58:44.692636 | orchestrator | Monday 09 March 2026 00:56:29 +0000 (0:00:00.355) 0:00:51.194 **********
2026-03-09 00:58:44.692643 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:58:44.692650 | orchestrator |
2026-03-09 00:58:44.692658 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-09 00:58:44.692665 | orchestrator | Monday 09 March 2026 00:56:39 +0000 (0:00:10.559) 0:01:01.754 **********
2026-03-09 00:58:44.692672 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:44.692680 | orchestrator |
2026-03-09 00:58:44.692687 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-09 00:58:44.692694 | orchestrator | Monday 09 March 2026 00:56:39 +0000 (0:00:00.132) 0:01:01.886 **********
2026-03-09 00:58:44.692702 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:58:44.692709 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:58:44.692716 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:44.692724 | orchestrator |
2026-03-09 00:58:44.692735 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-09 00:58:44.692742 | orchestrator | Monday 09 March 2026 00:56:41 +0000 (0:00:01.114) 0:01:03.001 **********
2026-03-09 00:58:44.692750 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:58:44.692757 | orchestrator |
2026-03-09 00:58:44.692764 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-09 00:58:44.692772 | orchestrator | Monday 09 March 2026 00:56:49 +0000 (0:00:08.526) 0:01:11.527 **********
2026-03-09 00:58:44.692799 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:44.692808 | orchestrator |
2026-03-09 00:58:44.692815 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-09 00:58:44.692822 | orchestrator | Monday 09 March 2026 00:56:51 +0000 (0:00:01.581) 0:01:13.109 **********
2026-03-09 00:58:44.692830 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:44.692837 | orchestrator |
2026-03-09 00:58:44.692844 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-09 00:58:44.692851 | orchestrator | Monday 09 March 2026 00:56:53 +0000 (0:00:02.407) 0:01:15.516 **********
2026-03-09 00:58:44.692859 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:58:44.692866 | orchestrator |
2026-03-09 00:58:44.692874 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-09 00:58:44.692881 | orchestrator | Monday 09 March 2026 00:56:53 +0000 (0:00:00.124) 0:01:15.641 **********
2026-03-09 00:58:44.692888 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:58:44.692896 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:58:44.692903 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:44.692910 | orchestrator |
2026-03-09 00:58:44.692917 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-09 00:58:44.692925 | orchestrator | Monday 09 March 2026 00:56:54 +0000 (0:00:00.284) 0:01:15.926 **********
2026-03-09 00:58:44.692937 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:58:44.692944 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:58:44.692952 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:58:44.692959 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-09 00:58:44.692966 | orchestrator |
2026-03-09 00:58:44.692973 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-09 00:58:44.692981 | orchestrator | skipping: no hosts matched
2026-03-09 00:58:44.692988 | orchestrator |
2026-03-09 00:58:44.692995 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-09 00:58:44.693003 | orchestrator |
2026-03-09 00:58:44.693010 | orchestrator | TASK [mariadb : Restart MariaDB container]
*************************************
2026-03-09 00:58:44.693017 | orchestrator | Monday 09 March 2026 00:56:54 +0000 (0:00:00.460) 0:01:16.386 **********
2026-03-09 00:58:44.693025 | orchestrator | changed: [testbed-node-1]
2026-03-09 00:58:44.693032 | orchestrator |
2026-03-09 00:58:44.693042 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-09 00:58:44.693054 | orchestrator | Monday 09 March 2026 00:57:15 +0000 (0:00:21.236) 0:01:37.623 **********
2026-03-09 00:58:44.693067 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:58:44.693080 | orchestrator |
2026-03-09 00:58:44.693092 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-09 00:58:44.693104 | orchestrator | Monday 09 March 2026 00:57:27 +0000 (0:00:11.648) 0:01:49.272 **********
2026-03-09 00:58:44.693116 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:58:44.693126 | orchestrator |
2026-03-09 00:58:44.693133 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-09 00:58:44.693142 | orchestrator |
2026-03-09 00:58:44.693150 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-09 00:58:44.693159 | orchestrator | Monday 09 March 2026 00:57:30 +0000 (0:00:02.688) 0:01:51.961 **********
2026-03-09 00:58:44.693168 | orchestrator | changed: [testbed-node-2]
2026-03-09 00:58:44.693176 | orchestrator |
2026-03-09 00:58:44.693185 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-09 00:58:44.693194 | orchestrator | Monday 09 March 2026 00:57:54 +0000 (0:00:24.029) 0:02:15.990 **********
2026-03-09 00:58:44.693203 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:58:44.693211 | orchestrator |
2026-03-09 00:58:44.693220 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-09 00:58:44.693228 | orchestrator | Monday 09 March 2026 00:58:05 +0000 (0:00:11.545) 0:02:27.536 **********
2026-03-09 00:58:44.693237 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:58:44.693246 | orchestrator |
2026-03-09 00:58:44.693254 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-09 00:58:44.693263 | orchestrator |
2026-03-09 00:58:44.693277 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-09 00:58:44.693299 | orchestrator | Monday 09 March 2026 00:58:08 +0000 (0:00:02.659) 0:02:30.196 **********
2026-03-09 00:58:44.693308 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:58:44.693317 | orchestrator |
2026-03-09 00:58:44.693325 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-09 00:58:44.693334 | orchestrator | Monday 09 March 2026 00:58:26 +0000 (0:00:17.870) 0:02:48.067 **********
2026-03-09 00:58:44.693343 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:44.693352 | orchestrator |
2026-03-09 00:58:44.693361 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-09 00:58:44.693370 | orchestrator | Monday 09 March 2026 00:58:26 +0000 (0:00:00.635) 0:02:48.702 **********
2026-03-09 00:58:44.693378 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:44.693387 | orchestrator |
2026-03-09 00:58:44.693396 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-09 00:58:44.693405 | orchestrator |
2026-03-09 00:58:44.693413 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-09 00:58:44.693430 | orchestrator | Monday 09 March 2026 00:58:29 +0000 (0:00:02.981) 0:02:51.684 **********
2026-03-09 00:58:44.693439 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 00:58:44.693448 | orchestrator |
2026-03-09 00:58:44.693456 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-09 00:58:44.693465 | orchestrator | Monday 09 March 2026 00:58:30 +0000 (0:00:00.608) 0:02:52.292 **********
2026-03-09 00:58:44.693474 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:58:44.693487 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:44.693496 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:58:44.693505 | orchestrator |
2026-03-09 00:58:44.693514 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-09 00:58:44.693523 | orchestrator | Monday 09 March 2026 00:58:32 +0000 (0:00:02.572) 0:02:54.865 **********
2026-03-09 00:58:44.693532 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:58:44.693540 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:44.693549 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:58:44.693558 | orchestrator |
2026-03-09 00:58:44.693567 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-09 00:58:44.693576 | orchestrator | Monday 09 March 2026 00:58:35 +0000 (0:00:02.503) 0:02:57.368 **********
2026-03-09 00:58:44.693584 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:58:44.693593 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:44.693602 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:58:44.693610 | orchestrator |
2026-03-09 00:58:44.693619 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-03-09 00:58:44.693628 | orchestrator | Monday 09 March 2026 00:58:37 +0000 (0:00:02.028) 0:02:59.396 **********
2026-03-09 00:58:44.693637 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:58:44.693645 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:44.693654 | orchestrator | changed: [testbed-node-0]
2026-03-09 00:58:44.693663 | orchestrator |
2026-03-09 00:58:44.693672 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-09 00:58:44.693680 | orchestrator | Monday 09 March 2026 00:58:39 +0000 (0:00:02.227) 0:03:01.624 **********
2026-03-09 00:58:44.693689 | orchestrator | ok: [testbed-node-0]
2026-03-09 00:58:44.693698 | orchestrator | ok: [testbed-node-1]
2026-03-09 00:58:44.693707 | orchestrator | ok: [testbed-node-2]
2026-03-09 00:58:44.693715 | orchestrator |
2026-03-09 00:58:44.693724 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-09 00:58:44.693733 | orchestrator | Monday 09 March 2026 00:58:43 +0000 (0:00:03.494) 0:03:05.118 **********
2026-03-09 00:58:44.693742 | orchestrator | skipping: [testbed-node-0]
2026-03-09 00:58:44.693750 | orchestrator | skipping: [testbed-node-1]
2026-03-09 00:58:44.693759 | orchestrator | skipping: [testbed-node-2]
2026-03-09 00:58:44.693768 | orchestrator |
2026-03-09 00:58:44.693777 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 00:58:44.693805 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-09 00:58:44.693815 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-03-09 00:58:44.693825 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-09 00:58:44.693834 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-09 00:58:44.693843 | orchestrator |
2026-03-09 00:58:44.693851 | orchestrator |
2026-03-09 00:58:44.693860 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 00:58:44.693869 | orchestrator | Monday 09 March 2026 00:58:43 +0000 (0:00:00.263) 0:03:05.382 **********
2026-03-09 00:58:44.693884 | orchestrator | ===============================================================================
2026-03-09 00:58:44.693893 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 45.27s
2026-03-09 00:58:44.693902 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 23.19s
2026-03-09 00:58:44.693910 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.87s
2026-03-09 00:58:44.693919 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.12s
2026-03-09 00:58:44.693928 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.56s
2026-03-09 00:58:44.693936 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.53s
2026-03-09 00:58:44.693950 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.35s
2026-03-09 00:58:44.693959 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.63s
2026-03-09 00:58:44.693968 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.79s
2026-03-09 00:58:44.693976 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.77s
2026-03-09 00:58:44.693985 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.76s
2026-03-09 00:58:44.693994 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.49s
2026-03-09 00:58:44.694003 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.31s
2026-03-09 00:58:44.694011 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.21s
2026-03-09 00:58:44.694070 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.17s
2026-03-09 00:58:44.694079 |
orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.98s
2026-03-09 00:58:44.694088 | orchestrator | Check MariaDB service --------------------------------------------------- 2.94s
2026-03-09 00:58:44.694097 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.57s
2026-03-09 00:58:44.694105 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.50s
2026-03-09 00:58:44.694114 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.41s
2026-03-09 00:58:44.694127 | orchestrator | 2026-03-09 00:58:44 | INFO  | Task 03ac4093-6d77-471d-9138-530043ac4275 is in state STARTED
2026-03-09 00:58:44.694137 | orchestrator | 2026-03-09 00:58:44 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:58:47.749738 | orchestrator | 2026-03-09 00:58:47 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED
2026-03-09 00:58:47.752600 | orchestrator | 2026-03-09 00:58:47 | INFO  | Task 03ac4093-6d77-471d-9138-530043ac4275 is in state STARTED
2026-03-09 00:58:47.753944 | orchestrator | 2026-03-09 00:58:47 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED
2026-03-09 00:58:47.756806 | orchestrator | 2026-03-09 00:58:47 | INFO  | Wait 1 second(s) until the next check
2026-03-09 00:59:51.779895 | orchestrator | 2026-03-09 00:59:51 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED
2026-03-09 00:59:51.782396 | orchestrator |
2026-03-09 00:59:51 | INFO  | Task 03ac4093-6d77-471d-9138-530043ac4275 is in state SUCCESS
2026-03-09 00:59:51.783933 | orchestrator |
2026-03-09 00:59:51.783964 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-09 00:59:51.783972 | orchestrator | 2.16.14
2026-03-09 00:59:51.783978 | orchestrator |
2026-03-09 00:59:51.783985 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-09 00:59:51.783991 | orchestrator |
2026-03-09 00:59:51.783997 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-09 00:59:51.784010 | orchestrator | Monday 09 March 2026 00:57:36 +0000 (0:00:00.638) 0:00:00.638 **********
2026-03-09 00:59:51.784016 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:59:51.784056 | orchestrator |
2026-03-09 00:59:51.784063 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-09 00:59:51.784069 | orchestrator | Monday 09 March 2026 00:57:37 +0000 (0:00:00.730) 0:00:01.368 **********
2026-03-09 00:59:51.784081 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.784088 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.784099 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.784105 | orchestrator |
2026-03-09 00:59:51.784111 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-09 00:59:51.784117 | orchestrator | Monday 09 March 2026 00:57:38 +0000 (0:00:00.673) 0:00:02.041 **********
2026-03-09 00:59:51.784123 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.784189 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.784202 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.784213 | orchestrator |
2026-03-09 00:59:51.784219 | orchestrator | TASK [ceph-facts : Check if podman binary
is present] **************************
2026-03-09 00:59:51.784224 | orchestrator | Monday 09 March 2026 00:57:38 +0000 (0:00:00.334) 0:00:02.376 **********
2026-03-09 00:59:51.784230 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.784235 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.784241 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.784246 | orchestrator |
2026-03-09 00:59:51.784252 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-09 00:59:51.784257 | orchestrator | Monday 09 March 2026 00:57:39 +0000 (0:00:00.905) 0:00:03.282 **********
2026-03-09 00:59:51.784263 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.784268 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.784274 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.784279 | orchestrator |
2026-03-09 00:59:51.784319 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-09 00:59:51.784325 | orchestrator | Monday 09 March 2026 00:57:39 +0000 (0:00:00.328) 0:00:03.610 **********
2026-03-09 00:59:51.784336 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.784342 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.784347 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.784353 | orchestrator |
2026-03-09 00:59:51.784358 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-09 00:59:51.784364 | orchestrator | Monday 09 March 2026 00:57:40 +0000 (0:00:00.318) 0:00:03.929 **********
2026-03-09 00:59:51.784369 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.784375 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.784755 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.784770 | orchestrator |
2026-03-09 00:59:51.784776 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-09 00:59:51.784782 | orchestrator | Monday 09 March 2026 00:57:40 +0000 (0:00:00.370) 0:00:04.300 **********
2026-03-09 00:59:51.784787 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.784793 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.784799 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.784804 | orchestrator |
2026-03-09 00:59:51.784817 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-09 00:59:51.784823 | orchestrator | Monday 09 March 2026 00:57:41 +0000 (0:00:00.544) 0:00:04.844 **********
2026-03-09 00:59:51.784829 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.784841 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.784847 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.784852 | orchestrator |
2026-03-09 00:59:51.784858 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-09 00:59:51.784863 | orchestrator | Monday 09 March 2026 00:57:41 +0000 (0:00:00.300) 0:00:05.144 **********
2026-03-09 00:59:51.784869 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 00:59:51.784874 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 00:59:51.784880 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 00:59:51.784885 | orchestrator |
2026-03-09 00:59:51.784890 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-09 00:59:51.784897 | orchestrator | Monday 09 March 2026 00:57:42 +0000 (0:00:00.857) 0:00:06.002 **********
2026-03-09 00:59:51.784906 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.784914 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.784919 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.784925 | orchestrator |
2026-03-09 00:59:51.784930 |
orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-09 00:59:51.784935 | orchestrator | Monday 09 March 2026 00:57:42 +0000 (0:00:00.508) 0:00:06.511 ********** 2026-03-09 00:59:51.784941 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-09 00:59:51.784953 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-09 00:59:51.784959 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-09 00:59:51.784964 | orchestrator | 2026-03-09 00:59:51.784970 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-09 00:59:51.784975 | orchestrator | Monday 09 March 2026 00:57:44 +0000 (0:00:02.195) 0:00:08.706 ********** 2026-03-09 00:59:51.784988 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-09 00:59:51.784993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-09 00:59:51.784999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-09 00:59:51.785005 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:51.785010 | orchestrator | 2026-03-09 00:59:51.785041 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-09 00:59:51.785048 | orchestrator | Monday 09 March 2026 00:57:45 +0000 (0:00:00.718) 0:00:09.425 ********** 2026-03-09 00:59:51.785055 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.785062 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.785082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.785088 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:51.785093 | orchestrator | 2026-03-09 00:59:51.785099 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-09 00:59:51.785105 | orchestrator | Monday 09 March 2026 00:57:46 +0000 (0:00:00.877) 0:00:10.302 ********** 2026-03-09 00:59:51.785111 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.785119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.785128 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.785134 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:51.785140 | orchestrator | 2026-03-09 00:59:51.785145 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-09 00:59:51.785151 | orchestrator | Monday 09 March 2026 00:57:46 +0000 (0:00:00.375) 0:00:10.678 ********** 2026-03-09 00:59:51.785164 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4bae6e15e9de', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-09 00:57:43.552735', 'end': '2026-03-09 00:57:43.591354', 'delta': '0:00:00.038619', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4bae6e15e9de'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-09 00:59:51.785177 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b2a51813d243', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-09 00:57:44.300857', 'end': '2026-03-09 00:57:44.334486', 'delta': '0:00:00.033629', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b2a51813d243'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-09 00:59:51.785202 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c6bfc84203a6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-09 00:57:44.811203', 'end': '2026-03-09 00:57:44.866422', 'delta': '0:00:00.055219', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6bfc84203a6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-09 00:59:51.785209 | orchestrator | 2026-03-09 00:59:51.785214 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-09 00:59:51.785220 | orchestrator | Monday 09 March 2026 00:57:47 +0000 (0:00:00.211) 0:00:10.890 ********** 2026-03-09 00:59:51.785225 | orchestrator | ok: [testbed-node-3] 2026-03-09 00:59:51.785231 | orchestrator | ok: [testbed-node-4] 2026-03-09 00:59:51.785243 | orchestrator | ok: [testbed-node-5] 2026-03-09 00:59:51.785248 | orchestrator | 2026-03-09 00:59:51.785254 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-09 00:59:51.785259 | orchestrator | Monday 09 March 2026 00:57:47 +0000 (0:00:00.454) 0:00:11.345 ********** 2026-03-09 00:59:51.785265 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-09 00:59:51.785270 | orchestrator | 2026-03-09 00:59:51.785276 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-09 00:59:51.785281 | orchestrator | Monday 09 March 2026 00:57:49 +0000 (0:00:01.889) 0:00:13.234 ********** 2026-03-09 
00:59:51.785287 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.785292 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.785297 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.785303 | orchestrator |
2026-03-09 00:59:51.785308 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-09 00:59:51.785314 | orchestrator | Monday 09 March 2026 00:57:49 +0000 (0:00:00.331) 0:00:13.566 **********
2026-03-09 00:59:51.785319 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.785325 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.785330 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.785336 | orchestrator |
2026-03-09 00:59:51.785342 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-09 00:59:51.785349 | orchestrator | Monday 09 March 2026 00:57:50 +0000 (0:00:00.501) 0:00:14.067 **********
2026-03-09 00:59:51.785361 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.785367 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.785373 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.785380 | orchestrator |
2026-03-09 00:59:51.785386 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-09 00:59:51.785393 | orchestrator | Monday 09 March 2026 00:57:50 +0000 (0:00:00.533) 0:00:14.600 **********
2026-03-09 00:59:51.785399 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.785405 | orchestrator |
2026-03-09 00:59:51.785412 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-09 00:59:51.785421 | orchestrator | Monday 09 March 2026 00:57:51 +0000 (0:00:00.142) 0:00:14.743 **********
2026-03-09 00:59:51.785428 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.785434 | orchestrator |
2026-03-09 00:59:51.785441 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-09 00:59:51.785447 | orchestrator | Monday 09 March 2026 00:57:51 +0000 (0:00:00.237) 0:00:14.980 **********
2026-03-09 00:59:51.785459 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.785465 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.785472 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.785478 | orchestrator |
2026-03-09 00:59:51.785484 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-09 00:59:51.785490 | orchestrator | Monday 09 March 2026 00:57:51 +0000 (0:00:00.315) 0:00:15.296 **********
2026-03-09 00:59:51.785497 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.785503 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.785515 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.785521 | orchestrator |
2026-03-09 00:59:51.785527 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-09 00:59:51.785534 | orchestrator | Monday 09 March 2026 00:57:51 +0000 (0:00:00.343) 0:00:15.640 **********
2026-03-09 00:59:51.785540 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.785546 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.785552 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.785559 | orchestrator |
2026-03-09 00:59:51.785565 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-09 00:59:51.785571 | orchestrator | Monday 09 March 2026 00:57:52 +0000 (0:00:00.584) 0:00:16.224 **********
2026-03-09 00:59:51.785578 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.785584 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.785596 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.785602 | orchestrator |
2026-03-09 00:59:51.785616 | orchestrator |
TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-09 00:59:51.785623 | orchestrator | Monday 09 March 2026 00:57:52 +0000 (0:00:00.339) 0:00:16.564 ********** 2026-03-09 00:59:51.785629 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:51.785635 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:51.785647 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:51.785654 | orchestrator | 2026-03-09 00:59:51.785660 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-09 00:59:51.785667 | orchestrator | Monday 09 March 2026 00:57:53 +0000 (0:00:00.399) 0:00:16.964 ********** 2026-03-09 00:59:51.785673 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:51.785679 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:51.785685 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:51.785723 | orchestrator | 2026-03-09 00:59:51.785731 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-09 00:59:51.785737 | orchestrator | Monday 09 March 2026 00:57:53 +0000 (0:00:00.394) 0:00:17.358 ********** 2026-03-09 00:59:51.785742 | orchestrator | skipping: [testbed-node-3] 2026-03-09 00:59:51.785748 | orchestrator | skipping: [testbed-node-4] 2026-03-09 00:59:51.785759 | orchestrator | skipping: [testbed-node-5] 2026-03-09 00:59:51.785765 | orchestrator | 2026-03-09 00:59:51.785771 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-09 00:59:51.785780 | orchestrator | Monday 09 March 2026 00:57:54 +0000 (0:00:00.583) 0:00:17.941 ********** 2026-03-09 00:59:51.785787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d9cda85--a301--5b16--a7fe--308b162b7259-osd--block--5d9cda85--a301--5b16--a7fe--308b162b7259', 
'dm-uuid-LVM-HMglKMgOarJt39elepRreQ13BbpBTpIwgcHAQSWoKwrA5ROauy6uoqWqljFkY8Uw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:51.785800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8734b320--4ffe--530d--8e73--0aec819257b4-osd--block--8734b320--4ffe--530d--8e73--0aec819257b4', 'dm-uuid-LVM-0oRFpggrbg2gDDWUKFXLRyOv3OVjB5p678FZlGpzndE4EOgbqu12F7mdcfnww5Ot'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:51.785806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:51.785820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:51.785827 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:51.785833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:51.785849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:51.785883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:51.785900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:51.785911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:51.785932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part15', 
'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:51.785944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5d9cda85--a301--5b16--a7fe--308b162b7259-osd--block--5d9cda85--a301--5b16--a7fe--308b162b7259'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m7qQkp-HCAP-ekOq-9sXu-j33Q-bbOW-LIzZw2', 'scsi-0QEMU_QEMU_HARDDISK_26907958-5014-4e4e-aaae-f132ebc9345b', 'scsi-SQEMU_QEMU_HARDDISK_26907958-5014-4e4e-aaae-f132ebc9345b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:51.785983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8734b320--4ffe--530d--8e73--0aec819257b4-osd--block--8734b320--4ffe--530d--8e73--0aec819257b4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-n1IWuy-ahs9-DtYW-xlXK-0evh-ueMl-JlfPEM', 'scsi-0QEMU_QEMU_HARDDISK_763f54df-2df6-4a17-b758-6e7498448fae', 'scsi-SQEMU_QEMU_HARDDISK_763f54df-2df6-4a17-b758-6e7498448fae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:51.786064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49ad7546-ef2d-4696-ae5b-c2e2e05846ff', 'scsi-SQEMU_QEMU_HARDDISK_49ad7546-ef2d-4696-ae5b-c2e2e05846ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-09 00:59:51.786079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--deb603ca--2db3--5399--8e8d--1e0d01641e0c-osd--block--deb603ca--2db3--5399--8e8d--1e0d01641e0c', 'dm-uuid-LVM-ymUw0TIiv27vbmGZKzqUO1xTKJjd4LELlXUeXZ0R5xZpaLUzedLAgzLI2r7WHUmD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-09 00:59:51.786089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:51.786095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1f67558--6290--50a7--9c09--ea5e74fb08ab-osd--block--c1f67558--6290--50a7--9c09--ea5e74fb08ab', 'dm-uuid-LVM-YsXR6FhgZvrm6EivKPjX3dlWMAJuQcNNm5yd8wUg87KYebMgLJonznJrEwWBLQt0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:51.786204 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.786210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--deb603ca--2db3--5399--8e8d--1e0d01641e0c-osd--block--deb603ca--2db3--5399--8e8d--1e0d01641e0c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4hCwG2-6dQd-RLGd-XZAt-F0Bt-0Qyo-ciHyUq', 'scsi-0QEMU_QEMU_HARDDISK_11658218-3952-45bc-99ae-d48f4d257268', 'scsi-SQEMU_QEMU_HARDDISK_11658218-3952-45bc-99ae-d48f4d257268'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:51.786216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c1f67558--6290--50a7--9c09--ea5e74fb08ab-osd--block--c1f67558--6290--50a7--9c09--ea5e74fb08ab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bgCf24-uSBe-cwb8-qv4r-q8a4-cjFj-uwPZpD', 'scsi-0QEMU_QEMU_HARDDISK_d43c938e-9c3c-4e95-bc09-26edff92b810', 'scsi-SQEMU_QEMU_HARDDISK_d43c938e-9c3c-4e95-bc09-26edff92b810'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:51.786224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a13d83a-3534-4183-8691-9f150495a6dc', 'scsi-SQEMU_QEMU_HARDDISK_3a13d83a-3534-4183-8691-9f150495a6dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:51.786230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:51.786235 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.786241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d8e344b--ecd1--5c90--b783--cb125ac7004a-osd--block--5d8e344b--ecd1--5c90--b783--cb125ac7004a', 'dm-uuid-LVM-orG5ExLC2iY5BVLcplh0u9DLThIpvNX3KJaplDmRqeZKenRtB4QpeuCWOXw3PgzH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d6be2487--d224--518f--9009--30806e6fa587-osd--block--d6be2487--d224--518f--9009--30806e6fa587', 'dm-uuid-LVM-KL2LYmO1kTxlUYYUFh2gjCBXeECjhxCTJB1356ftbeK9beZSHRhKJu7vShqwZTE5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-09 00:59:51.786325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part1', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part14', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part15', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part16', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:51.786334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5d8e344b--ecd1--5c90--b783--cb125ac7004a-osd--block--5d8e344b--ecd1--5c90--b783--cb125ac7004a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tkYHv1-MxvC-0X6I-bNAj-IW5c-doAI-n0j1mI', 'scsi-0QEMU_QEMU_HARDDISK_34bdd215-cdf5-4909-8dd4-972bf1b79030', 'scsi-SQEMU_QEMU_HARDDISK_34bdd215-cdf5-4909-8dd4-972bf1b79030'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:51.786340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d6be2487--d224--518f--9009--30806e6fa587-osd--block--d6be2487--d224--518f--9009--30806e6fa587'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hAH1cU-weZP-I8mi-YRcA-iLqE-N7sJ-khZnrk', 'scsi-0QEMU_QEMU_HARDDISK_709b939c-9ac4-47b1-b5c3-cb1d8710b2fd', 'scsi-SQEMU_QEMU_HARDDISK_709b939c-9ac4-47b1-b5c3-cb1d8710b2fd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:51.786349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_069ee836-7f84-4f9f-9b43-0fd45db025c2', 'scsi-SQEMU_QEMU_HARDDISK_069ee836-7f84-4f9f-9b43-0fd45db025c2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:51.786358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-09 00:59:51.786364 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.786370 | orchestrator |
2026-03-09 00:59:51.786376 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-09 00:59:51.786381 | orchestrator | Monday 09 March 2026 00:57:54 +0000 (0:00:00.695) 0:00:18.637 **********
2026-03-09 00:59:51.786387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d9cda85--a301--5b16--a7fe--308b162b7259-osd--block--5d9cda85--a301--5b16--a7fe--308b162b7259', 'dm-uuid-LVM-HMglKMgOarJt39elepRreQ13BbpBTpIwgcHAQSWoKwrA5ROauy6uoqWqljFkY8Uw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786394 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8734b320--4ffe--530d--8e73--0aec819257b4-osd--block--8734b320--4ffe--530d--8e73--0aec819257b4', 'dm-uuid-LVM-0oRFpggrbg2gDDWUKFXLRyOv3OVjB5p678FZlGpzndE4EOgbqu12F7mdcfnww5Ot'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786402 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786409 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786418 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786429 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786442 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786448 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786457 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786475 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--deb603ca--2db3--5399--8e8d--1e0d01641e0c-osd--block--deb603ca--2db3--5399--8e8d--1e0d01641e0c', 'dm-uuid-LVM-ymUw0TIiv27vbmGZKzqUO1xTKJjd4LELlXUeXZ0R5xZpaLUzedLAgzLI2r7WHUmD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786486 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_bfa8e3a8-0734-434c-abec-79aad619d4fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786496 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1f67558--6290--50a7--9c09--ea5e74fb08ab-osd--block--c1f67558--6290--50a7--9c09--ea5e74fb08ab', 'dm-uuid-LVM-YsXR6FhgZvrm6EivKPjX3dlWMAJuQcNNm5yd8wUg87KYebMgLJonznJrEwWBLQt0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786506 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5d9cda85--a301--5b16--a7fe--308b162b7259-osd--block--5d9cda85--a301--5b16--a7fe--308b162b7259'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m7qQkp-HCAP-ekOq-9sXu-j33Q-bbOW-LIzZw2', 'scsi-0QEMU_QEMU_HARDDISK_26907958-5014-4e4e-aaae-f132ebc9345b', 'scsi-SQEMU_QEMU_HARDDISK_26907958-5014-4e4e-aaae-f132ebc9345b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786516 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786522 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8734b320--4ffe--530d--8e73--0aec819257b4-osd--block--8734b320--4ffe--530d--8e73--0aec819257b4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-n1IWuy-ahs9-DtYW-xlXK-0evh-ueMl-JlfPEM', 'scsi-0QEMU_QEMU_HARDDISK_763f54df-2df6-4a17-b758-6e7498448fae', 'scsi-SQEMU_QEMU_HARDDISK_763f54df-2df6-4a17-b758-6e7498448fae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786528 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49ad7546-ef2d-4696-ae5b-c2e2e05846ff', 'scsi-SQEMU_QEMU_HARDDISK_49ad7546-ef2d-4696-ae5b-c2e2e05846ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786546 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786552 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786559 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.786568 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786575 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786581 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786587 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786598 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786609 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4e9789d-4787-4538-a188-9409f1cddce2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
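The long runs of "skipping" items above come from a task gated on `osd_auto_discovery | default(False) | bool`, which is false in this testbed, so every entry of the `ansible_devices` fact is skipped. As a hedged illustration of what an auto-discovery pass filters on (a hypothetical Python approximation based only on the device facts echoed in this log, not ceph-ansible's actual implementation):

```python
def discover_osd_candidates(ansible_devices):
    """Return device names that look usable as OSD data disks.

    Hypothetical filter modeled on the facts visible above: loop
    devices report 0 sectors, removable media (sr0) are excluded,
    partitioned disks (sda, the OS disk) are excluded, and a disk
    with holders (e.g. a ceph LVM volume on sdb/sdc) is already
    claimed.
    """
    candidates = []
    for name, facts in sorted(ansible_devices.items()):
        if facts["removable"] != "0":   # skip CD-ROM / removable media
            continue
        if int(facts["sectors"]) == 0:  # skip empty loop devices
            continue
        if facts["partitions"]:         # skip partitioned (OS) disks
            continue
        if facts["holders"]:            # skip disks already claimed by LVM/ceph
            continue
        candidates.append(name)
    return candidates
```

Run against a fact dict shaped like the log output, only the unpartitioned, unclaimed data disk (`sdd` on these nodes) would survive the filter.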
2026-03-09 00:59:51.786622 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d8e344b--ecd1--5c90--b783--cb125ac7004a-osd--block--5d8e344b--ecd1--5c90--b783--cb125ac7004a', 'dm-uuid-LVM-orG5ExLC2iY5BVLcplh0u9DLThIpvNX3KJaplDmRqeZKenRtB4QpeuCWOXw3PgzH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786631 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--deb603ca--2db3--5399--8e8d--1e0d01641e0c-osd--block--deb603ca--2db3--5399--8e8d--1e0d01641e0c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4hCwG2-6dQd-RLGd-XZAt-F0Bt-0Qyo-ciHyUq', 'scsi-0QEMU_QEMU_HARDDISK_11658218-3952-45bc-99ae-d48f4d257268', 'scsi-SQEMU_QEMU_HARDDISK_11658218-3952-45bc-99ae-d48f4d257268'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786641 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d6be2487--d224--518f--9009--30806e6fa587-osd--block--d6be2487--d224--518f--9009--30806e6fa587', 'dm-uuid-LVM-KL2LYmO1kTxlUYYUFh2gjCBXeECjhxCTJB1356ftbeK9beZSHRhKJu7vShqwZTE5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786651 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c1f67558--6290--50a7--9c09--ea5e74fb08ab-osd--block--c1f67558--6290--50a7--9c09--ea5e74fb08ab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bgCf24-uSBe-cwb8-qv4r-q8a4-cjFj-uwPZpD', 'scsi-0QEMU_QEMU_HARDDISK_d43c938e-9c3c-4e95-bc09-26edff92b810', 'scsi-SQEMU_QEMU_HARDDISK_d43c938e-9c3c-4e95-bc09-26edff92b810'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786657 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786663 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a13d83a-3534-4183-8691-9f150495a6dc', 'scsi-SQEMU_QEMU_HARDDISK_3a13d83a-3534-4183-8691-9f150495a6dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786672 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786681 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786694 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 00:59:51.786700 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786732 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786740 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786747 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786753 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786765 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786777 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part1', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part14', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part15', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part16', 'scsi-SQEMU_QEMU_HARDDISK_6df45922-8f75-4a42-8a21-8a577e31863a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786784 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5d8e344b--ecd1--5c90--b783--cb125ac7004a-osd--block--5d8e344b--ecd1--5c90--b783--cb125ac7004a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tkYHv1-MxvC-0X6I-bNAj-IW5c-doAI-n0j1mI', 'scsi-0QEMU_QEMU_HARDDISK_34bdd215-cdf5-4909-8dd4-972bf1b79030', 'scsi-SQEMU_QEMU_HARDDISK_34bdd215-cdf5-4909-8dd4-972bf1b79030'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d6be2487--d224--518f--9009--30806e6fa587-osd--block--d6be2487--d224--518f--9009--30806e6fa587'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hAH1cU-weZP-I8mi-YRcA-iLqE-N7sJ-khZnrk', 'scsi-0QEMU_QEMU_HARDDISK_709b939c-9ac4-47b1-b5c3-cb1d8710b2fd', 'scsi-SQEMU_QEMU_HARDDISK_709b939c-9ac4-47b1-b5c3-cb1d8710b2fd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786810 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_069ee836-7f84-4f9f-9b43-0fd45db025c2', 'scsi-SQEMU_QEMU_HARDDISK_069ee836-7f84-4f9f-9b43-0fd45db025c2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-09 00:59:51.786820 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-09-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-09 00:59:51.786826 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.786832 | orchestrator |
2026-03-09 00:59:51.786838 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-09 00:59:51.786844 | orchestrator | Monday 09 March 2026 00:57:55 +0000 (0:00:00.696) 0:00:19.334 **********
2026-03-09 00:59:51.786850 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.786856 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.786862 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.786868 | orchestrator |
2026-03-09 00:59:51.786874 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-09 00:59:51.786879 | orchestrator | Monday 09 March 2026 00:57:56 +0000 (0:00:00.715) 0:00:20.050 **********
2026-03-09 00:59:51.786885 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.786891 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.786897 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.786902 | orchestrator |
2026-03-09 00:59:51.786908 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-09 00:59:51.786914 | orchestrator | Monday 09 March 2026 00:57:56 +0000 (0:00:00.555) 0:00:20.605 **********
2026-03-09 00:59:51.786925 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.786931 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.786937 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.786942 | orchestrator |
2026-03-09 00:59:51.786948 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-09 00:59:51.786956 | orchestrator | Monday 09 March 2026 00:57:57 +0000 (0:00:00.680) 0:00:21.285 **********
2026-03-09 00:59:51.786966 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.786975 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.786985 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.786994 | orchestrator |
2026-03-09 00:59:51.787013 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-09 00:59:51.787024 | orchestrator | Monday 09 March 2026 00:57:57 +0000 (0:00:00.320) 0:00:21.606 **********
2026-03-09 00:59:51.787032 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.787042 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.787051 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.787060 | orchestrator |
2026-03-09 00:59:51.787069 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-09 00:59:51.787078 | orchestrator | Monday 09 March 2026 00:57:58 +0000 (0:00:00.454) 0:00:22.061 **********
2026-03-09 00:59:51.787087 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.787105 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.787114 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.787123 | orchestrator |
2026-03-09 00:59:51.787131 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-09 00:59:51.787140 | orchestrator | Monday 09 March 2026 00:57:58 +0000 (0:00:00.562) 0:00:22.624 **********
2026-03-09 00:59:51.787150 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-09 00:59:51.787160 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-09 00:59:51.787175 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-09 00:59:51.787194 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-09 00:59:51.787204 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-09 00:59:51.787213 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-09 00:59:51.787222 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-09 00:59:51.787232 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-09 00:59:51.787241 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-09 00:59:51.787260 | orchestrator |
2026-03-09 00:59:51.787270 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-09 00:59:51.787280 | orchestrator | Monday 09 March 2026 00:57:59 +0000 (0:00:00.931) 0:00:23.556 **********
2026-03-09 00:59:51.787290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-09 00:59:51.787300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-09 00:59:51.787309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-09 00:59:51.787318 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.787327 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-09 00:59:51.787337 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-09 00:59:51.787348 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-09 00:59:51.787357 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.787367 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-09 00:59:51.787373 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-09 00:59:51.787379 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-09 00:59:51.787384 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.787390 | orchestrator |
2026-03-09 00:59:51.787396 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-09 00:59:51.787402 | orchestrator | Monday 09 March 2026 00:58:00 +0000 (0:00:00.422) 0:00:23.979 **********
2026-03-09 00:59:51.787415 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 00:59:51.787421 | orchestrator |
2026-03-09 00:59:51.787427 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-09 00:59:51.787433 | orchestrator | Monday 09 March 2026 00:58:01 +0000 (0:00:00.844) 0:00:24.824 **********
2026-03-09 00:59:51.787446 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.787452 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.787458 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.787472 | orchestrator |
2026-03-09 00:59:51.787478 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-09 00:59:51.787484 | orchestrator | Monday 09 March 2026 00:58:01 +0000 (0:00:00.370) 0:00:25.194 **********
2026-03-09 00:59:51.787490 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.787495 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.787501 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.787507 | orchestrator |
2026-03-09 00:59:51.787513 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-09 00:59:51.787518 | orchestrator | Monday 09 March 2026 00:58:01 +0000 (0:00:00.363) 0:00:25.558 **********
2026-03-09 00:59:51.787524 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.787530 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.787536 | orchestrator | skipping: [testbed-node-5]
2026-03-09 00:59:51.787541 | orchestrator |
2026-03-09 00:59:51.787547 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-09 00:59:51.787553 | orchestrator | Monday 09 March 2026 00:58:02 +0000 (0:00:00.362) 0:00:25.921 **********
2026-03-09 00:59:51.787559 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.787564 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.787570 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.787576 | orchestrator |
2026-03-09 00:59:51.787582 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-09 00:59:51.787588 | orchestrator | Monday 09 March 2026 00:58:03 +0000 (0:00:01.054) 0:00:26.976 **********
2026-03-09 00:59:51.787593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:59:51.787599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 00:59:51.787606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 00:59:51.787617 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.787623 | orchestrator |
2026-03-09 00:59:51.787629 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-09 00:59:51.787635 | orchestrator | Monday 09 March 2026 00:58:03 +0000 (0:00:00.429) 0:00:27.405 **********
2026-03-09 00:59:51.787642 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:59:51.787653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 00:59:51.787662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 00:59:51.787672 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.787681 | orchestrator |
2026-03-09 00:59:51.787691 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-09 00:59:51.787701 | orchestrator | Monday 09 March 2026 00:58:04 +0000 (0:00:00.416) 0:00:27.821 **********
2026-03-09 00:59:51.787746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:59:51.787758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-09 00:59:51.787768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-09 00:59:51.787774 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.787780 | orchestrator |
2026-03-09 00:59:51.787786 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-09 00:59:51.787792 | orchestrator | Monday 09 March 2026 00:58:04 +0000 (0:00:00.420) 0:00:28.242 **********
2026-03-09 00:59:51.787803 | orchestrator | ok: [testbed-node-3]
2026-03-09 00:59:51.787809 | orchestrator | ok: [testbed-node-4]
2026-03-09 00:59:51.787818 | orchestrator | ok: [testbed-node-5]
2026-03-09 00:59:51.787824 | orchestrator |
2026-03-09 00:59:51.787830 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-09 00:59:51.787836 | orchestrator | Monday 09 March 2026 00:58:04 +0000 (0:00:00.379) 0:00:28.622 **********
2026-03-09 00:59:51.787842 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-09 00:59:51.787847 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-09 00:59:51.787853 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-09 00:59:51.787859 | orchestrator |
2026-03-09 00:59:51.787864 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-09 00:59:51.787870 | orchestrator | Monday 09 March 2026 00:58:05 +0000 (0:00:00.585) 0:00:29.208 **********
2026-03-09 00:59:51.787876 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 00:59:51.787882 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 00:59:51.787888 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 00:59:51.787893 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:59:51.787899 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-09 00:59:51.787905 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-09 00:59:51.787911 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-09 00:59:51.787921 | orchestrator |
2026-03-09 00:59:51.787928 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-09 00:59:51.787934 | orchestrator | Monday 09 March 2026 00:58:06 +0000 (0:00:01.086) 0:00:30.295 **********
2026-03-09 00:59:51.787940 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-09 00:59:51.787946 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-09 00:59:51.787952 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-09 00:59:51.787958 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-09 00:59:51.787964 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-09 00:59:51.787969 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-09 00:59:51.787980 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-09 00:59:51.787986 | orchestrator |
2026-03-09 00:59:51.787992 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-03-09 00:59:51.788006 | orchestrator | Monday 09 March 2026 00:58:08 +0000 (0:00:02.150) 0:00:32.445 **********
2026-03-09 00:59:51.788012 | orchestrator | skipping: [testbed-node-3]
2026-03-09 00:59:51.788018 | orchestrator | skipping: [testbed-node-4]
2026-03-09 00:59:51.788024 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-03-09 00:59:51.788034 | orchestrator |
2026-03-09 00:59:51.788044 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-03-09 00:59:51.788053 | orchestrator | Monday 09 March 2026 00:58:09 +0000 (0:00:00.505) 0:00:32.950 **********
2026-03-09 00:59:51.788065 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-09 00:59:51.788085 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-09 00:59:51.788101 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-09 00:59:51.788111 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-09 00:59:51.788121 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-09 00:59:51.788131 | orchestrator |
2026-03-09 00:59:51.788151 | orchestrator | TASK [generate keys]
*********************************************************** 2026-03-09 00:59:51.788162 | orchestrator | Monday 09 March 2026 00:58:55 +0000 (0:00:46.006) 0:01:18.956 ********** 2026-03-09 00:59:51.788171 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788184 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788197 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788203 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788209 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788215 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788220 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-09 00:59:51.788226 | orchestrator | 2026-03-09 00:59:51.788233 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-09 00:59:51.788242 | orchestrator | Monday 09 March 2026 00:59:20 +0000 (0:00:25.079) 0:01:44.036 ********** 2026-03-09 00:59:51.788252 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788271 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788280 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788288 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788298 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788306 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788315 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-09 00:59:51.788324 | orchestrator | 2026-03-09 00:59:51.788343 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-09 00:59:51.788353 | orchestrator | Monday 09 March 2026 00:59:32 +0000 (0:00:12.636) 0:01:56.673 ********** 2026-03-09 00:59:51.788363 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788372 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-09 00:59:51.788382 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 00:59:51.788392 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788410 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-09 00:59:51.788428 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 00:59:51.788438 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788458 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-09 00:59:51.788468 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 00:59:51.788478 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788488 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-09 00:59:51.788498 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 00:59:51.788506 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788516 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-09 00:59:51.788523 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 00:59:51.788529 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-09 00:59:51.788535 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-09 00:59:51.788541 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-09 00:59:51.788547 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-09 00:59:51.788552 | orchestrator | 2026-03-09 00:59:51.788558 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 00:59:51.788564 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-09 00:59:51.788571 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-09 00:59:51.788577 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-09 00:59:51.788583 | orchestrator | 2026-03-09 00:59:51.788589 | orchestrator | 2026-03-09 00:59:51.788594 | orchestrator | 2026-03-09 00:59:51.788600 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 00:59:51.788606 | orchestrator | Monday 09 March 2026 00:59:50 +0000 (0:00:17.927) 0:02:14.600 ********** 2026-03-09 00:59:51.788612 | orchestrator | =============================================================================== 2026-03-09 00:59:51.788617 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.01s 2026-03-09 00:59:51.788623 | orchestrator | generate keys ---------------------------------------------------------- 25.08s 2026-03-09 00:59:51.788629 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.93s 
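The pool and keyring steps recapped above are ceph-ansible wrapping the ceph CLI. The sketch below shows roughly what the `create openstack pool(s)` task runs per pool; the pool names, `pg_num: 32`, `size: 3`, `pg_autoscale_mode: False`, and the `rbd` application tag are taken from the item dicts in the log, but the exact command sequence is an assumption about the role's behavior, not read from this job.

```shell
# Sketch: approximate ceph CLI equivalent of the "create openstack pool(s)"
# task above. Values (pg_num=32, size=3, rule=replicated_rule, app=rbd) come
# from the logged item dicts; the command breakdown itself is assumed.
pool_create_cmds() {
  # $1 = pool name; emit the four commands for one pool
  p="$1"
  echo "ceph osd pool create $p 32 32 replicated replicated_rule"
  echo "ceph osd pool set $p pg_autoscale_mode off"
  echo "ceph osd pool set $p size 3"
  echo "ceph osd pool application enable $p rbd"
}

# The five pools created in the log:
for p in backups volumes images metrics vms; do
  pool_create_cmds "$p"
done
# The subsequent "generate keys" task similarly wraps
# 'ceph auth get-or-create client.<name> ...' for each OpenStack client.
```

The 46s runtime of the pool task is consistent with running such a command batch serially against the first monitor (`testbed-node-0`), which is where the log shows the task delegated.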
2026-03-09 00:59:51.788635 | orchestrator | get keys from monitors ------------------------------------------------- 12.64s 2026-03-09 00:59:51.788645 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.20s 2026-03-09 00:59:51.788651 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.15s 2026-03-09 00:59:51.788657 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.89s 2026-03-09 00:59:51.788663 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.09s 2026-03-09 00:59:51.788669 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 1.05s 2026-03-09 00:59:51.788675 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.93s 2026-03-09 00:59:51.788680 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.91s 2026-03-09 00:59:51.788686 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.88s 2026-03-09 00:59:51.788692 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.86s 2026-03-09 00:59:51.788698 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.84s 2026-03-09 00:59:51.788728 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.73s 2026-03-09 00:59:51.788740 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.72s 2026-03-09 00:59:51.788746 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s 2026-03-09 00:59:51.788752 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.70s 2026-03-09 00:59:51.788758 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.70s 2026-03-09 
00:59:51.788763 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s 2026-03-09 00:59:51.788769 | orchestrator | 2026-03-09 00:59:51 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 00:59:51.788775 | orchestrator | 2026-03-09 00:59:51 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:54.812744 | orchestrator | 2026-03-09 00:59:54 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 00:59:54.813671 | orchestrator | 2026-03-09 00:59:54 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 00:59:54.814508 | orchestrator | 2026-03-09 00:59:54 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 00:59:54.814553 | orchestrator | 2026-03-09 00:59:54 | INFO  | Wait 1 second(s) until the next check 2026-03-09 00:59:57.853173 | orchestrator | 2026-03-09 00:59:57 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 00:59:57.854172 | orchestrator | 2026-03-09 00:59:57 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 00:59:57.856096 | orchestrator | 2026-03-09 00:59:57 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 00:59:57.856534 | orchestrator | 2026-03-09 00:59:57 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:00.892843 | orchestrator | 2026-03-09 01:00:00 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 01:00:00.894426 | orchestrator | 2026-03-09 01:00:00 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:00.896595 | orchestrator | 2026-03-09 01:00:00 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:00.896634 | orchestrator | 2026-03-09 01:00:00 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:03.933206 | orchestrator | 2026-03-09 01:00:03 | INFO  | Task 
52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 01:00:03.934113 | orchestrator | 2026-03-09 01:00:03 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:03.936398 | orchestrator | 2026-03-09 01:00:03 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:03.936430 | orchestrator | 2026-03-09 01:00:03 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:06.987561 | orchestrator | 2026-03-09 01:00:06 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 01:00:06.988791 | orchestrator | 2026-03-09 01:00:06 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:06.990750 | orchestrator | 2026-03-09 01:00:06 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:06.990827 | orchestrator | 2026-03-09 01:00:06 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:10.040301 | orchestrator | 2026-03-09 01:00:10 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 01:00:10.041960 | orchestrator | 2026-03-09 01:00:10 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:10.042868 | orchestrator | 2026-03-09 01:00:10 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:10.042966 | orchestrator | 2026-03-09 01:00:10 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:13.103099 | orchestrator | 2026-03-09 01:00:13 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 01:00:13.104030 | orchestrator | 2026-03-09 01:00:13 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:13.105573 | orchestrator | 2026-03-09 01:00:13 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:13.106670 | orchestrator | 2026-03-09 01:00:13 | INFO  | Wait 1 second(s) until the next 
check 2026-03-09 01:00:16.164272 | orchestrator | 2026-03-09 01:00:16 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 01:00:16.165370 | orchestrator | 2026-03-09 01:00:16 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:16.166387 | orchestrator | 2026-03-09 01:00:16 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:16.166420 | orchestrator | 2026-03-09 01:00:16 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:19.204595 | orchestrator | 2026-03-09 01:00:19 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 01:00:19.205955 | orchestrator | 2026-03-09 01:00:19 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:19.207858 | orchestrator | 2026-03-09 01:00:19 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:19.207909 | orchestrator | 2026-03-09 01:00:19 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:22.258441 | orchestrator | 2026-03-09 01:00:22 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 01:00:22.259375 | orchestrator | 2026-03-09 01:00:22 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:22.261240 | orchestrator | 2026-03-09 01:00:22 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:22.261303 | orchestrator | 2026-03-09 01:00:22 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:25.300091 | orchestrator | 2026-03-09 01:00:25 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 01:00:25.300891 | orchestrator | 2026-03-09 01:00:25 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:25.302093 | orchestrator | 2026-03-09 01:00:25 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 
01:00:25.302143 | orchestrator | 2026-03-09 01:00:25 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:28.340959 | orchestrator | 2026-03-09 01:00:28 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 01:00:28.341058 | orchestrator | 2026-03-09 01:00:28 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:28.343886 | orchestrator | 2026-03-09 01:00:28 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:28.343974 | orchestrator | 2026-03-09 01:00:28 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:31.386289 | orchestrator | 2026-03-09 01:00:31 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state STARTED 2026-03-09 01:00:31.387104 | orchestrator | 2026-03-09 01:00:31 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:31.388508 | orchestrator | 2026-03-09 01:00:31 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:31.388567 | orchestrator | 2026-03-09 01:00:31 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:34.437022 | orchestrator | 2026-03-09 01:00:34 | INFO  | Task 52361909-04ce-486f-8388-4c98512c4a60 is in state SUCCESS 2026-03-09 01:00:34.437604 | orchestrator | 2026-03-09 01:00:34 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:34.438632 | orchestrator | 2026-03-09 01:00:34 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:34.438799 | orchestrator | 2026-03-09 01:00:34 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:37.485305 | orchestrator | 2026-03-09 01:00:37 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:00:37.491002 | orchestrator | 2026-03-09 01:00:37 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:37.492659 | orchestrator | 2026-03-09 01:00:37 | 
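The interleaved `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` records are the OSISM manager polling its Celery task states until each play finishes (note `52361909…` flipping to SUCCESS at 01:00:34). A minimal sketch of that loop, assuming a caller-supplied `get_state` lookup — that helper is hypothetical, the real client is internal to OSISM:

```shell
# Sketch of the status-poll loop visible in the log: report each task's
# state, and sleep between rounds while any task is still STARTED.
# 'get_state' is a hypothetical stand-in for the manager's Celery state query.
poll_tasks() {
  # $@ = task ids; requires get_state to be defined by the caller
  while :; do
    pending=0
    for id in "$@"; do
      state="$(get_state "$id")"
      echo "Task $id is in state $state"
      if [ "$state" = "STARTED" ]; then pending=1; fi
    done
    if [ "$pending" -eq 0 ]; then break; fi
    echo "Wait 1 second(s) until the next check"
    sleep 1
  done
}
```

The log's ~3-second gap between rounds (rather than exactly 1s) suggests the per-task state queries themselves take time on top of the sleep.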
INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:37.492881 | orchestrator | 2026-03-09 01:00:37 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:40.539326 | orchestrator | 2026-03-09 01:00:40 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:00:40.541425 | orchestrator | 2026-03-09 01:00:40 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state STARTED 2026-03-09 01:00:40.542876 | orchestrator | 2026-03-09 01:00:40 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:40.542967 | orchestrator | 2026-03-09 01:00:40 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:43.587507 | orchestrator | 2026-03-09 01:00:43 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:00:43.592134 | orchestrator | 2026-03-09 01:00:43.592219 | orchestrator | 2026-03-09 01:00:43.592233 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-09 01:00:43.592245 | orchestrator | 2026-03-09 01:00:43.592256 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-09 01:00:43.592266 | orchestrator | Monday 09 March 2026 00:59:55 +0000 (0:00:00.160) 0:00:00.160 ********** 2026-03-09 01:00:43.592276 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-09 01:00:43.592288 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-09 01:00:43.592298 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-09 01:00:43.592308 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-09 01:00:43.592318 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring) 2026-03-09 01:00:43.592328 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-09 01:00:43.592336 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-09 01:00:43.592344 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-09 01:00:43.592352 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-09 01:00:43.592360 | orchestrator | 2026-03-09 01:00:43.592368 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-09 01:00:43.592376 | orchestrator | Monday 09 March 2026 01:00:00 +0000 (0:00:04.888) 0:00:05.049 ********** 2026-03-09 01:00:43.592384 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-09 01:00:43.592392 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-09 01:00:43.592424 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-09 01:00:43.592432 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-09 01:00:43.592440 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-09 01:00:43.592448 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-09 01:00:43.592456 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-09 01:00:43.592464 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-09 01:00:43.592472 | orchestrator | 
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-09 01:00:43.592479 | orchestrator | 2026-03-09 01:00:43.592487 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-09 01:00:43.592495 | orchestrator | Monday 09 March 2026 01:00:05 +0000 (0:00:04.732) 0:00:09.781 ********** 2026-03-09 01:00:43.592505 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-09 01:00:43.592513 | orchestrator | 2026-03-09 01:00:43.592521 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-09 01:00:43.592529 | orchestrator | Monday 09 March 2026 01:00:06 +0000 (0:00:01.156) 0:00:10.938 ********** 2026-03-09 01:00:43.592538 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-09 01:00:43.592546 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-09 01:00:43.592554 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-09 01:00:43.592562 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-09 01:00:43.592570 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-09 01:00:43.592578 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-09 01:00:43.592586 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-09 01:00:43.592594 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-09 01:00:43.592602 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-09 01:00:43.592609 | orchestrator | 2026-03-09 01:00:43.592617 | orchestrator | TASK [Check if target directories exist] 
*************************************** 2026-03-09 01:00:43.592625 | orchestrator | Monday 09 March 2026 01:00:22 +0000 (0:00:16.028) 0:00:26.966 ********** 2026-03-09 01:00:43.592633 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-09 01:00:43.592642 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-09 01:00:43.592650 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-09 01:00:43.592689 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-09 01:00:43.593105 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-09 01:00:43.593129 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-09 01:00:43.593138 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-09 01:00:43.593146 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-09 01:00:43.593235 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-09 01:00:43.593251 | orchestrator | 2026-03-09 01:00:43.593260 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-09 01:00:43.593278 | orchestrator | Monday 09 March 2026 01:00:26 +0000 (0:00:03.329) 0:00:30.296 ********** 2026-03-09 01:00:43.593287 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-09 01:00:43.593295 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-09 01:00:43.593303 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.cinder.keyring) 2026-03-09 01:00:43.593312 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-09 01:00:43.593320 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-09 01:00:43.593328 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-09 01:00:43.593336 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-09 01:00:43.593344 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-09 01:00:43.593352 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-09 01:00:43.593360 | orchestrator | 2026-03-09 01:00:43.593368 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:00:43.593376 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:00:43.593386 | orchestrator | 2026-03-09 01:00:43.593394 | orchestrator | 2026-03-09 01:00:43.593402 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:00:43.593409 | orchestrator | Monday 09 March 2026 01:00:33 +0000 (0:00:07.676) 0:00:37.973 ********** 2026-03-09 01:00:43.593417 | orchestrator | =============================================================================== 2026-03-09 01:00:43.593425 | orchestrator | Write ceph keys to the share directory --------------------------------- 16.03s 2026-03-09 01:00:43.593433 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.68s 2026-03-09 01:00:43.593441 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.89s 2026-03-09 01:00:43.593449 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.73s 2026-03-09 01:00:43.593456 | orchestrator | Check if target 
directories exist --------------------------------------- 3.33s 2026-03-09 01:00:43.593464 | orchestrator | Create share directory -------------------------------------------------- 1.16s 2026-03-09 01:00:43.593472 | orchestrator | 2026-03-09 01:00:43.593480 | orchestrator | 2026-03-09 01:00:43.593488 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:00:43.593496 | orchestrator | 2026-03-09 01:00:43.593504 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:00:43.593512 | orchestrator | Monday 09 March 2026 00:58:48 +0000 (0:00:00.392) 0:00:00.392 ********** 2026-03-09 01:00:43.593520 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:43.593528 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:43.593536 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:43.593543 | orchestrator | 2026-03-09 01:00:43.593551 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:00:43.593559 | orchestrator | Monday 09 March 2026 00:58:49 +0000 (0:00:00.268) 0:00:00.660 ********** 2026-03-09 01:00:43.593568 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-09 01:00:43.593576 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-09 01:00:43.593585 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-09 01:00:43.593593 | orchestrator | 2026-03-09 01:00:43.593601 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-09 01:00:43.593609 | orchestrator | 2026-03-09 01:00:43.593617 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:00:43.593624 | orchestrator | Monday 09 March 2026 00:58:49 +0000 (0:00:00.427) 0:00:01.088 ********** 2026-03-09 01:00:43.593632 | orchestrator | included: 
/ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:00:43.593646 | orchestrator | 2026-03-09 01:00:43.593654 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-09 01:00:43.593724 | orchestrator | Monday 09 March 2026 00:58:50 +0000 (0:00:00.508) 0:00:01.596 ********** 2026-03-09 01:00:43.593755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:00:43.593776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:00:43.593804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:00:43.593815 | orchestrator | 2026-03-09 01:00:43.593826 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-09 01:00:43.593835 | orchestrator | Monday 09 March 2026 00:58:51 +0000 (0:00:01.438) 0:00:03.035 ********** 2026-03-09 01:00:43.593845 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:43.593854 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:43.593864 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:43.593874 | orchestrator | 2026-03-09 01:00:43.593882 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:00:43.593890 | orchestrator | Monday 09 March 2026 00:58:52 +0000 (0:00:00.494) 0:00:03.529 ********** 2026-03-09 01:00:43.593898 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-09 01:00:43.593906 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-09 
01:00:43.593914 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-09 01:00:43.593922 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-09 01:00:43.593930 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-09 01:00:43.593938 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-09 01:00:43.593952 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-09 01:00:43.593960 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-09 01:00:43.593968 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-09 01:00:43.593976 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-09 01:00:43.593984 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-09 01:00:43.593992 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-09 01:00:43.593999 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-09 01:00:43.594007 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-09 01:00:43.594086 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-09 01:00:43.594098 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-09 01:00:43.594106 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-09 01:00:43.594114 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-09 01:00:43.594122 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 
'enabled': False})  2026-03-09 01:00:43.594130 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-09 01:00:43.594138 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-09 01:00:43.594146 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-09 01:00:43.594160 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-09 01:00:43.594168 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-09 01:00:43.594188 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-09 01:00:43.594198 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-09 01:00:43.594215 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-09 01:00:43.594223 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-09 01:00:43.594231 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-09 01:00:43.594239 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-09 01:00:43.594247 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': 
True}) 2026-03-09 01:00:43.594255 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-09 01:00:43.594263 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-09 01:00:43.594271 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-09 01:00:43.594279 | orchestrator | 2026-03-09 01:00:43.594288 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:00:43.594302 | orchestrator | Monday 09 March 2026 00:58:53 +0000 (0:00:00.882) 0:00:04.411 ********** 2026-03-09 01:00:43.594310 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:43.594318 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:43.594326 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:43.594334 | orchestrator | 2026-03-09 01:00:43.594347 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:00:43.594361 | orchestrator | Monday 09 March 2026 00:58:53 +0000 (0:00:00.396) 0:00:04.807 ********** 2026-03-09 01:00:43.594370 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.594378 | orchestrator | 2026-03-09 01:00:43.594386 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:00:43.594393 | orchestrator | Monday 09 March 2026 00:58:53 +0000 (0:00:00.140) 0:00:04.948 ********** 2026-03-09 01:00:43.594401 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.594409 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.594417 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.594425 | orchestrator | 2026-03-09 
01:00:43.594433 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:00:43.594441 | orchestrator | Monday 09 March 2026 00:58:54 +0000 (0:00:00.555) 0:00:05.503 ********** 2026-03-09 01:00:43.594448 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:43.594456 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:43.594464 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:43.594472 | orchestrator | 2026-03-09 01:00:43.594480 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:00:43.594488 | orchestrator | Monday 09 March 2026 00:58:54 +0000 (0:00:00.441) 0:00:05.945 ********** 2026-03-09 01:00:43.594496 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.594504 | orchestrator | 2026-03-09 01:00:43.594512 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:00:43.594520 | orchestrator | Monday 09 March 2026 00:58:54 +0000 (0:00:00.138) 0:00:06.083 ********** 2026-03-09 01:00:43.594527 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.594536 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.594543 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.594551 | orchestrator | 2026-03-09 01:00:43.594559 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:00:43.594567 | orchestrator | Monday 09 March 2026 00:58:55 +0000 (0:00:00.331) 0:00:06.415 ********** 2026-03-09 01:00:43.594579 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:43.594587 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:43.594595 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:43.594602 | orchestrator | 2026-03-09 01:00:43.594610 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:00:43.594618 | orchestrator | Monday 09 March 2026 
00:58:55 +0000 (0:00:00.548) 0:00:06.963 ********** 2026-03-09 01:00:43.594626 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.594634 | orchestrator | 2026-03-09 01:00:43.594642 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:00:43.594650 | orchestrator | Monday 09 March 2026 00:58:55 +0000 (0:00:00.382) 0:00:07.346 ********** 2026-03-09 01:00:43.594677 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.594685 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.594693 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.594701 | orchestrator | 2026-03-09 01:00:43.594709 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:00:43.594722 | orchestrator | Monday 09 March 2026 00:58:56 +0000 (0:00:00.332) 0:00:07.678 ********** 2026-03-09 01:00:43.594730 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:43.594738 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:43.594746 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:43.594754 | orchestrator | 2026-03-09 01:00:43.594762 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:00:43.594775 | orchestrator | Monday 09 March 2026 00:58:56 +0000 (0:00:00.353) 0:00:08.032 ********** 2026-03-09 01:00:43.594783 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.594791 | orchestrator | 2026-03-09 01:00:43.594799 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:00:43.594807 | orchestrator | Monday 09 March 2026 00:58:56 +0000 (0:00:00.133) 0:00:08.166 ********** 2026-03-09 01:00:43.594815 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.594823 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.594831 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.594839 | 
orchestrator | 2026-03-09 01:00:43.594846 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:00:43.594854 | orchestrator | Monday 09 March 2026 00:58:57 +0000 (0:00:00.316) 0:00:08.482 ********** 2026-03-09 01:00:43.594862 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:43.594870 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:43.594878 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:43.594886 | orchestrator | 2026-03-09 01:00:43.594894 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:00:43.594902 | orchestrator | Monday 09 March 2026 00:58:57 +0000 (0:00:00.533) 0:00:09.016 ********** 2026-03-09 01:00:43.594910 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.594917 | orchestrator | 2026-03-09 01:00:43.594925 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:00:43.594933 | orchestrator | Monday 09 March 2026 00:58:57 +0000 (0:00:00.144) 0:00:09.161 ********** 2026-03-09 01:00:43.594941 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.594950 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.594957 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.594966 | orchestrator | 2026-03-09 01:00:43.594973 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:00:43.594981 | orchestrator | Monday 09 March 2026 00:58:58 +0000 (0:00:00.330) 0:00:09.492 ********** 2026-03-09 01:00:43.594989 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:43.594997 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:43.595005 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:43.595013 | orchestrator | 2026-03-09 01:00:43.595021 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:00:43.595029 | orchestrator 
| Monday 09 March 2026 00:58:58 +0000 (0:00:00.342) 0:00:09.834 ********** 2026-03-09 01:00:43.595037 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.595045 | orchestrator | 2026-03-09 01:00:43.595053 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:00:43.595061 | orchestrator | Monday 09 March 2026 00:58:58 +0000 (0:00:00.147) 0:00:09.981 ********** 2026-03-09 01:00:43.595069 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.595077 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.595085 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.595092 | orchestrator | 2026-03-09 01:00:43.595100 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:00:43.595108 | orchestrator | Monday 09 March 2026 00:58:58 +0000 (0:00:00.312) 0:00:10.294 ********** 2026-03-09 01:00:43.595116 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:43.595124 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:43.595132 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:43.595140 | orchestrator | 2026-03-09 01:00:43.595148 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:00:43.595156 | orchestrator | Monday 09 March 2026 00:58:59 +0000 (0:00:00.618) 0:00:10.912 ********** 2026-03-09 01:00:43.595164 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.595172 | orchestrator | 2026-03-09 01:00:43.595180 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:00:43.595188 | orchestrator | Monday 09 March 2026 00:58:59 +0000 (0:00:00.190) 0:00:11.103 ********** 2026-03-09 01:00:43.595196 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.595209 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.595216 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
01:00:43.595224 | orchestrator | 2026-03-09 01:00:43.595233 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:00:43.595241 | orchestrator | Monday 09 March 2026 00:59:00 +0000 (0:00:00.325) 0:00:11.429 ********** 2026-03-09 01:00:43.595249 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:43.595257 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:43.595264 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:43.595272 | orchestrator | 2026-03-09 01:00:43.595280 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:00:43.595288 | orchestrator | Monday 09 March 2026 00:59:00 +0000 (0:00:00.348) 0:00:11.777 ********** 2026-03-09 01:00:43.595296 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.595304 | orchestrator | 2026-03-09 01:00:43.595312 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:00:43.595324 | orchestrator | Monday 09 March 2026 00:59:00 +0000 (0:00:00.191) 0:00:11.968 ********** 2026-03-09 01:00:43.595332 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.595340 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.595348 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.595356 | orchestrator | 2026-03-09 01:00:43.595364 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:00:43.595372 | orchestrator | Monday 09 March 2026 00:59:01 +0000 (0:00:00.576) 0:00:12.544 ********** 2026-03-09 01:00:43.595380 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:43.595388 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:43.595396 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:43.595403 | orchestrator | 2026-03-09 01:00:43.595412 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 
01:00:43.595420 | orchestrator | Monday 09 March 2026 00:59:01 +0000 (0:00:00.392) 0:00:12.937 ********** 2026-03-09 01:00:43.595428 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.595436 | orchestrator | 2026-03-09 01:00:43.595449 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:00:43.595457 | orchestrator | Monday 09 March 2026 00:59:01 +0000 (0:00:00.134) 0:00:13.071 ********** 2026-03-09 01:00:43.595465 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.595473 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.595481 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.595489 | orchestrator | 2026-03-09 01:00:43.595497 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-09 01:00:43.595505 | orchestrator | Monday 09 March 2026 00:59:02 +0000 (0:00:00.350) 0:00:13.422 ********** 2026-03-09 01:00:43.595513 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:00:43.595521 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:00:43.595529 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:00:43.595537 | orchestrator | 2026-03-09 01:00:43.595545 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-09 01:00:43.595553 | orchestrator | Monday 09 March 2026 00:59:02 +0000 (0:00:00.373) 0:00:13.795 ********** 2026-03-09 01:00:43.595561 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.595569 | orchestrator | 2026-03-09 01:00:43.595577 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-09 01:00:43.595585 | orchestrator | Monday 09 March 2026 00:59:02 +0000 (0:00:00.135) 0:00:13.931 ********** 2026-03-09 01:00:43.595593 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.595601 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.595609 | orchestrator | skipping: 
[testbed-node-2] 2026-03-09 01:00:43.595617 | orchestrator | 2026-03-09 01:00:43.595625 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-09 01:00:43.595633 | orchestrator | Monday 09 March 2026 00:59:03 +0000 (0:00:00.532) 0:00:14.463 ********** 2026-03-09 01:00:43.595641 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:00:43.595649 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:00:43.595675 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:00:43.595683 | orchestrator | 2026-03-09 01:00:43.595691 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-09 01:00:43.595699 | orchestrator | Monday 09 March 2026 00:59:04 +0000 (0:00:01.644) 0:00:16.108 ********** 2026-03-09 01:00:43.595707 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-09 01:00:43.595715 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-09 01:00:43.595724 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-09 01:00:43.595732 | orchestrator | 2026-03-09 01:00:43.595740 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-09 01:00:43.595747 | orchestrator | Monday 09 March 2026 00:59:06 +0000 (0:00:01.953) 0:00:18.061 ********** 2026-03-09 01:00:43.595755 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-09 01:00:43.595764 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-09 01:00:43.595772 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-09 01:00:43.595780 | orchestrator | 2026-03-09 01:00:43.595788 | orchestrator | TASK [horizon : 
Copying over custom-settings.py] ******************************* 2026-03-09 01:00:43.595796 | orchestrator | Monday 09 March 2026 00:59:09 +0000 (0:00:02.536) 0:00:20.598 ********** 2026-03-09 01:00:43.595804 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-09 01:00:43.595812 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-09 01:00:43.595820 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-09 01:00:43.595828 | orchestrator | 2026-03-09 01:00:43.595836 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-09 01:00:43.595844 | orchestrator | Monday 09 March 2026 00:59:11 +0000 (0:00:02.136) 0:00:22.734 ********** 2026-03-09 01:00:43.595852 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.595860 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.595868 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.595876 | orchestrator | 2026-03-09 01:00:43.595884 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-09 01:00:43.595892 | orchestrator | Monday 09 March 2026 00:59:11 +0000 (0:00:00.337) 0:00:23.072 ********** 2026-03-09 01:00:43.595899 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.595908 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.595916 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.595924 | orchestrator | 2026-03-09 01:00:43.595932 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:00:43.595940 | orchestrator | Monday 09 March 2026 00:59:11 +0000 (0:00:00.310) 0:00:23.383 ********** 2026-03-09 01:00:43.595955 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:00:43.595963 | orchestrator | 2026-03-09 01:00:43.595971 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-09 01:00:43.595979 | orchestrator | Monday 09 March 2026 00:59:12 +0000 (0:00:00.812) 0:00:24.195 ********** 2026-03-09 01:00:43.595996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:00:43.596017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:00:43 | INFO  | Task 04486514-12b9-4e6a-a26f-0d62ce56d3e2 is in state SUCCESS 2026-03-09 01:00:43.596034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra':
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:00:43.596058 | orchestrator | 2026-03-09 01:00:43.596067 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-09 01:00:43.596075 | orchestrator | Monday 09 March 2026 00:59:14 +0000 (0:00:01.663) 0:00:25.859 ********** 2026-03-09 01:00:43.596094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:00:43.596108 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.596117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:00:43.596126 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.596145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:00:43.596159 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.596167 | orchestrator | 2026-03-09 01:00:43.596175 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-09 01:00:43.596183 | orchestrator | Monday 09 March 2026 00:59:15 +0000 (0:00:00.733) 0:00:26.593 ********** 2026-03-09 01:00:43.596192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:00:43.596201 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.596220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:00:43.596235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-09 01:00:43.596244 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.596252 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.596260 | orchestrator | 2026-03-09 01:00:43.596268 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-09 01:00:43.596279 | orchestrator | Monday 09 March 2026 00:59:16 +0000 (0:00:00.972) 0:00:27.566 ********** 2026-03-09 01:00:43.596303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:00:43.596318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:00:43.596340 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-09 01:00:43.596350 | orchestrator | 2026-03-09 01:00:43.596358 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:00:43.596366 | orchestrator | Monday 09 March 2026 00:59:17 +0000 (0:00:01.572) 0:00:29.138 ********** 2026-03-09 01:00:43.596374 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:00:43.596382 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:00:43.596390 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:00:43.596398 | orchestrator | 2026-03-09 01:00:43.596405 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-09 01:00:43.596414 | orchestrator | Monday 09 March 2026 00:59:18 +0000 (0:00:00.378) 0:00:29.517 ********** 2026-03-09 01:00:43.596421 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:00:43.596429 | orchestrator | 2026-03-09 01:00:43.596437 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-09 01:00:43.596445 | orchestrator | Monday 09 March 2026 00:59:18 +0000 (0:00:00.642) 0:00:30.159 ********** 2026-03-09 01:00:43.596453 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:00:43.596461 | orchestrator | 2026-03-09 01:00:43.596469 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-09 01:00:43.596477 | orchestrator | Monday 09 March 2026 00:59:21 +0000 (0:00:02.599) 0:00:32.758 ********** 2026-03-09 01:00:43.596485 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:00:43.596493 | orchestrator | 2026-03-09 01:00:43.596501 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-09 01:00:43.596509 | orchestrator | Monday 09 March 
2026 00:59:24 +0000 (0:00:03.249) 0:00:36.008 ********** 2026-03-09 01:00:43.596522 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:00:43.596530 | orchestrator | 2026-03-09 01:00:43.596539 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-09 01:00:43.596547 | orchestrator | Monday 09 March 2026 00:59:42 +0000 (0:00:17.839) 0:00:53.848 ********** 2026-03-09 01:00:43.596554 | orchestrator | 2026-03-09 01:00:43.596562 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-09 01:00:43.596570 | orchestrator | Monday 09 March 2026 00:59:42 +0000 (0:00:00.115) 0:00:53.963 ********** 2026-03-09 01:00:43.596578 | orchestrator | 2026-03-09 01:00:43.596586 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-09 01:00:43.596594 | orchestrator | Monday 09 March 2026 00:59:42 +0000 (0:00:00.079) 0:00:54.042 ********** 2026-03-09 01:00:43.596602 | orchestrator | 2026-03-09 01:00:43.596616 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-09 01:00:43.596624 | orchestrator | Monday 09 March 2026 00:59:42 +0000 (0:00:00.083) 0:00:54.126 ********** 2026-03-09 01:00:43.596632 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:00:43.596640 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:00:43.596648 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:00:43.596678 | orchestrator | 2026-03-09 01:00:43.596687 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:00:43.596695 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-09 01:00:43.596703 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-09 01:00:43.596717 | orchestrator | testbed-node-2 : ok=34  changed=8  
unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-09 01:00:43.596725 | orchestrator | 2026-03-09 01:00:43.596733 | orchestrator | 2026-03-09 01:00:43.596741 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:00:43.596749 | orchestrator | Monday 09 March 2026 01:00:41 +0000 (0:00:58.694) 0:01:52.820 ********** 2026-03-09 01:00:43.596757 | orchestrator | =============================================================================== 2026-03-09 01:00:43.596765 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.69s 2026-03-09 01:00:43.596773 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.84s 2026-03-09 01:00:43.596781 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.25s 2026-03-09 01:00:43.596788 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.60s 2026-03-09 01:00:43.596796 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.54s 2026-03-09 01:00:43.596804 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.14s 2026-03-09 01:00:43.596812 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.95s 2026-03-09 01:00:43.596820 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.66s 2026-03-09 01:00:43.596828 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.65s 2026-03-09 01:00:43.596836 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.57s 2026-03-09 01:00:43.596843 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.44s 2026-03-09 01:00:43.596851 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.97s 
2026-03-09 01:00:43.596859 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.88s 2026-03-09 01:00:43.596867 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-03-09 01:00:43.596875 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.73s 2026-03-09 01:00:43.596888 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.64s 2026-03-09 01:00:43.596896 | orchestrator | horizon : Update policy file name --------------------------------------- 0.62s 2026-03-09 01:00:43.596904 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.58s 2026-03-09 01:00:43.596912 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2026-03-09 01:00:43.596920 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2026-03-09 01:00:43.596928 | orchestrator | 2026-03-09 01:00:43 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:43.596936 | orchestrator | 2026-03-09 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:46.630868 | orchestrator | 2026-03-09 01:00:46 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:00:46.632957 | orchestrator | 2026-03-09 01:00:46 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:46.633056 | orchestrator | 2026-03-09 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:49.693035 | orchestrator | 2026-03-09 01:00:49 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:00:49.694409 | orchestrator | 2026-03-09 01:00:49 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:49.694447 | orchestrator | 2026-03-09 01:00:49 | INFO  | Wait 1 second(s) until 
the next check 2026-03-09 01:00:52.750370 | orchestrator | 2026-03-09 01:00:52 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:00:52.753531 | orchestrator | 2026-03-09 01:00:52 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:52.753606 | orchestrator | 2026-03-09 01:00:52 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:55.797228 | orchestrator | 2026-03-09 01:00:55 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:00:55.800212 | orchestrator | 2026-03-09 01:00:55 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:55.800314 | orchestrator | 2026-03-09 01:00:55 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:00:58.840102 | orchestrator | 2026-03-09 01:00:58 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:00:58.840621 | orchestrator | 2026-03-09 01:00:58 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:00:58.840863 | orchestrator | 2026-03-09 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:01.881803 | orchestrator | 2026-03-09 01:01:01 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:01:01.885727 | orchestrator | 2026-03-09 01:01:01 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:01:01.885800 | orchestrator | 2026-03-09 01:01:01 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:04.917901 | orchestrator | 2026-03-09 01:01:04 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:01:04.918270 | orchestrator | 2026-03-09 01:01:04 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:01:04.918770 | orchestrator | 2026-03-09 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:07.956211 | orchestrator | 2026-03-09 01:01:07 
| INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:01:07.959181 | orchestrator | 2026-03-09 01:01:07 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:01:07.959871 | orchestrator | 2026-03-09 01:01:07 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:11.007885 | orchestrator | 2026-03-09 01:01:11 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:01:11.009525 | orchestrator | 2026-03-09 01:01:11 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:01:11.009556 | orchestrator | 2026-03-09 01:01:11 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:14.053530 | orchestrator | 2026-03-09 01:01:14 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:01:14.053628 | orchestrator | 2026-03-09 01:01:14 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:01:14.053642 | orchestrator | 2026-03-09 01:01:14 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:17.089860 | orchestrator | 2026-03-09 01:01:17 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:01:17.090912 | orchestrator | 2026-03-09 01:01:17 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:01:17.090954 | orchestrator | 2026-03-09 01:01:17 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:20.139353 | orchestrator | 2026-03-09 01:01:20 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 2026-03-09 01:01:20.141548 | orchestrator | 2026-03-09 01:01:20 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED 2026-03-09 01:01:20.141634 | orchestrator | 2026-03-09 01:01:20 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:01:23.182459 | orchestrator | 2026-03-09 01:01:23 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED 
2026-03-09 01:01:23.184055 | orchestrator | 2026-03-09 01:01:23 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED
2026-03-09 01:01:23.184099 | orchestrator | 2026-03-09 01:01:23 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:01:26.230569 | orchestrator | 2026-03-09 01:01:26 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED
2026-03-09 01:01:26.232523 | orchestrator | 2026-03-09 01:01:26 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED
2026-03-09 01:01:26.232598 | orchestrator | 2026-03-09 01:01:26 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:01:29.280049 | orchestrator | 2026-03-09 01:01:29 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED
2026-03-09 01:01:29.281599 | orchestrator | 2026-03-09 01:01:29 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED
2026-03-09 01:01:29.281651 | orchestrator | 2026-03-09 01:01:29 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:01:32.328094 | orchestrator | 2026-03-09 01:01:32 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED
2026-03-09 01:01:32.328914 | orchestrator | 2026-03-09 01:01:32 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state STARTED
2026-03-09 01:01:32.328954 | orchestrator | 2026-03-09 01:01:32 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:01:35.379617 | orchestrator | 2026-03-09 01:01:35 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state STARTED
2026-03-09 01:01:35.381935 | orchestrator | 2026-03-09 01:01:35 | INFO  | Task 02f6f1c2-9d67-474d-b23d-2fdc31a8e625 is in state SUCCESS
2026-03-09 01:01:35.384911 | orchestrator |
2026-03-09 01:01:35.385072 | orchestrator |
2026-03-09 01:01:35.385096 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:01:35.385113 | orchestrator |
2026-03-09 01:01:35.385127 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:01:35.385172 | orchestrator | Monday 09 March 2026 00:58:48 +0000 (0:00:00.265) 0:00:00.265 **********
2026-03-09 01:01:35.385190 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:01:35.385207 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:01:35.385222 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:01:35.385238 | orchestrator |
2026-03-09 01:01:35.385252 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:01:35.385267 | orchestrator | Monday 09 March 2026 00:58:49 +0000 (0:00:00.294) 0:00:00.559 **********
2026-03-09 01:01:35.385284 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-09 01:01:35.385300 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-09 01:01:35.385316 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-09 01:01:35.385330 | orchestrator |
2026-03-09 01:01:35.385344 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-03-09 01:01:35.385358 | orchestrator |
2026-03-09 01:01:35.385373 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-09 01:01:35.385387 | orchestrator | Monday 09 March 2026 00:58:49 +0000 (0:00:00.404) 0:00:00.964 **********
2026-03-09 01:01:35.385402 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:01:35.385418 | orchestrator |
2026-03-09 01:01:35.385433 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-03-09 01:01:35.385449 | orchestrator | Monday 09 March 2026 00:58:50 +0000 (0:00:00.543) 0:00:01.507 **********
2026-03-09 01:01:35.385471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.385492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.385584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.385615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:01:35.385632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:01:35.385646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:01:35.385661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:01:35.385677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:01:35.385691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:01:35.385716 | orchestrator |
2026-03-09 01:01:35.385768 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-03-09 01:01:35.385782 | orchestrator | Monday 09 March 2026 00:58:52 +0000 (0:00:02.011) 0:00:03.519 **********
2026-03-09 01:01:35.385797 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:35.385812 | orchestrator |
2026-03-09 01:01:35.385832 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-03-09 01:01:35.385846 | orchestrator | Monday 09 March 2026 00:58:52 +0000 (0:00:00.138) 0:00:03.658 **********
2026-03-09 01:01:35.385859 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:35.385871 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:35.385885 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:35.385898 | orchestrator |
2026-03-09 01:01:35.385911 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-03-09 01:01:35.385925 | orchestrator | Monday 09 March 2026 00:58:52 +0000 (0:00:00.610) 0:00:04.268 **********
2026-03-09 01:01:35.385939 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-09 01:01:35.385952 | orchestrator |
2026-03-09 01:01:35.385966 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-09 01:01:35.385979 | orchestrator | Monday 09 March 2026 00:58:54 +0000 (0:00:01.126) 0:00:05.394 **********
2026-03-09 01:01:35.385993 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:01:35.386006 | orchestrator |
2026-03-09 01:01:35.386073 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-03-09 01:01:35.386091 | orchestrator | Monday 09 March 2026 00:58:54 +0000 (0:00:00.681) 0:00:06.076 **********
2026-03-09 01:01:35.386107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.386123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.386162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.386191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:01:35.386206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:01:35.386221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:01:35.386236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:01:35.386250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:01:35.386272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:01:35.386286 | orchestrator |
2026-03-09 01:01:35.386300 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-03-09 01:01:35.386315 | orchestrator | Monday 09 March 2026 00:58:58 +0000 (0:00:03.808) 0:00:09.885 **********
2026-03-09 01:01:35.386343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.386358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:01:35.386371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:01:35.386385 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:35.386402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.386427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:01:35.386447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:01:35.386461 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:35.386485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.386500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:01:35.386514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:01:35.386537 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:35.386550 | orchestrator |
2026-03-09 01:01:35.386564 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-03-09 01:01:35.386578 | orchestrator | Monday 09 March 2026 00:58:59 +0000 (0:00:00.634) 0:00:10.520 **********
2026-03-09 01:01:35.386593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.386613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:01:35.386635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:01:35.386649 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:01:35.386664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.386678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:01:35.386868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:01:35.386891 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:01:35.386914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.386941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-09 01:01:35.387090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-09 01:01:35.387102 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:01:35.387113 | orchestrator |
2026-03-09 01:01:35.387125 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-03-09 01:01:35.387137 | orchestrator | Monday 09 March 2026 00:59:00 +0000 (0:00:00.933) 0:00:11.453 **********
2026-03-09 01:01:35.387149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.387172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-09 01:01:35.387202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy':
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:01:35.387215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:01:35.387227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:01:35.387239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:01:35.387258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:01:35.387269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:01:35.387282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:01:35.387294 | orchestrator | 2026-03-09 01:01:35.387310 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-09 01:01:35.387323 | orchestrator | Monday 09 March 2026 00:59:03 +0000 (0:00:03.638) 0:00:15.092 ********** 2026-03-09 01:01:35.387342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:01:35.387353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:01:35.387372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:01:35.387384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:01:35.387413 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:01:35.387427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:01:35.387440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:01:35.387459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:01:35.387471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:01:35.387483 | orchestrator | 2026-03-09 01:01:35.387494 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-09 01:01:35.387507 | orchestrator | Monday 09 March 2026 00:59:09 +0000 (0:00:06.153) 0:00:21.246 ********** 2026-03-09 01:01:35.387518 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:35.387530 | 
orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:35.387541 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:35.387553 | orchestrator | 2026-03-09 01:01:35.387564 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-09 01:01:35.387575 | orchestrator | Monday 09 March 2026 00:59:11 +0000 (0:00:01.528) 0:00:22.775 ********** 2026-03-09 01:01:35.387587 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.387598 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:35.387610 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:35.387621 | orchestrator | 2026-03-09 01:01:35.387634 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-09 01:01:35.387646 | orchestrator | Monday 09 March 2026 00:59:11 +0000 (0:00:00.559) 0:00:23.334 ********** 2026-03-09 01:01:35.387657 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.387668 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:35.387679 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:35.387692 | orchestrator | 2026-03-09 01:01:35.387703 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-09 01:01:35.387714 | orchestrator | Monday 09 March 2026 00:59:12 +0000 (0:00:00.326) 0:00:23.661 ********** 2026-03-09 01:01:35.387788 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.387801 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:35.387812 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:35.387824 | orchestrator | 2026-03-09 01:01:35.387836 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-09 01:01:35.387848 | orchestrator | Monday 09 March 2026 00:59:12 +0000 (0:00:00.571) 0:00:24.233 ********** 2026-03-09 01:01:35.387877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-09 01:01:35.387898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:01:35.387911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:01:35.387923 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.387935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-09 01:01:35.387948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-09 01:01:35.387970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:01:35.387989 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:35.388002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-09 01:01:35.388014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-09 01:01:35.388026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-09 01:01:35.388039 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:35.388052 | orchestrator | 2026-03-09 01:01:35.388063 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:01:35.388075 | orchestrator | Monday 09 March 2026 00:59:13 +0000 (0:00:00.946) 0:00:25.180 ********** 2026-03-09 01:01:35.388086 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.388098 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:35.388110 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:35.388121 | orchestrator | 2026-03-09 01:01:35.388132 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-09 01:01:35.388143 | orchestrator | Monday 09 March 2026 00:59:14 +0000 (0:00:00.344) 0:00:25.524 ********** 2026-03-09 01:01:35.388154 | orchestrator | changed: 
[testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-09 01:01:35.388166 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-09 01:01:35.388177 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-09 01:01:35.388189 | orchestrator | 2026-03-09 01:01:35.388201 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-09 01:01:35.388212 | orchestrator | Monday 09 March 2026 00:59:16 +0000 (0:00:01.848) 0:00:27.373 ********** 2026-03-09 01:01:35.388225 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:01:35.388239 | orchestrator | 2026-03-09 01:01:35.388252 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-09 01:01:35.388269 | orchestrator | Monday 09 March 2026 00:59:17 +0000 (0:00:01.302) 0:00:28.675 ********** 2026-03-09 01:01:35.388281 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.388293 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:35.388304 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:35.388314 | orchestrator | 2026-03-09 01:01:35.388326 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-09 01:01:35.388336 | orchestrator | Monday 09 March 2026 00:59:18 +0000 (0:00:00.911) 0:00:29.586 ********** 2026-03-09 01:01:35.388352 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:01:35.388363 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-09 01:01:35.388374 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-09 01:01:35.388385 | orchestrator | 2026-03-09 01:01:35.388397 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-09 01:01:35.388416 | orchestrator | Monday 09 March 2026 00:59:19 +0000 
(0:00:01.168) 0:00:30.755 ********** 2026-03-09 01:01:35.388429 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:35.388441 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:35.388452 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:35.388463 | orchestrator | 2026-03-09 01:01:35.388474 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-09 01:01:35.388485 | orchestrator | Monday 09 March 2026 00:59:19 +0000 (0:00:00.338) 0:00:31.094 ********** 2026-03-09 01:01:35.388497 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-09 01:01:35.388508 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-09 01:01:35.388520 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-09 01:01:35.388531 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-09 01:01:35.388543 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-09 01:01:35.388555 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-09 01:01:35.388566 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-09 01:01:35.388579 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-09 01:01:35.388590 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-09 01:01:35.388602 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-09 01:01:35.388614 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 
'fernet-push.sh'}) 2026-03-09 01:01:35.388626 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-09 01:01:35.388638 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-09 01:01:35.388650 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-09 01:01:35.388662 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-09 01:01:35.388674 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:01:35.388686 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:01:35.388697 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:01:35.388708 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:01:35.388719 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:01:35.388758 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:01:35.388781 | orchestrator | 2026-03-09 01:01:35.388793 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-09 01:01:35.388804 | orchestrator | Monday 09 March 2026 00:59:29 +0000 (0:00:09.332) 0:00:40.427 ********** 2026-03-09 01:01:35.388816 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:01:35.388827 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:01:35.388838 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 
'sshd_config'}) 2026-03-09 01:01:35.388850 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:01:35.388861 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:01:35.388872 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:01:35.388883 | orchestrator | 2026-03-09 01:01:35.388895 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-09 01:01:35.388907 | orchestrator | Monday 09 March 2026 00:59:32 +0000 (0:00:03.038) 0:00:43.466 ********** 2026-03-09 01:01:35.388934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:01:35.388949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:01:35.388962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-09 01:01:35.388997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:01:35.389010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:01:35.389028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-09 01:01:35.389049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:01:35.389061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:01:35.389073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-09 01:01:35.389092 | orchestrator | 2026-03-09 01:01:35.389103 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:01:35.389114 | orchestrator | Monday 09 March 2026 00:59:34 +0000 
(0:00:02.431) 0:00:45.897 ********** 2026-03-09 01:01:35.389126 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.389138 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:35.389149 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:35.389161 | orchestrator | 2026-03-09 01:01:35.389172 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-09 01:01:35.389183 | orchestrator | Monday 09 March 2026 00:59:34 +0000 (0:00:00.325) 0:00:46.223 ********** 2026-03-09 01:01:35.389195 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:35.389207 | orchestrator | 2026-03-09 01:01:35.389219 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-09 01:01:35.389230 | orchestrator | Monday 09 March 2026 00:59:37 +0000 (0:00:02.459) 0:00:48.682 ********** 2026-03-09 01:01:35.389242 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:35.389253 | orchestrator | 2026-03-09 01:01:35.389264 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-09 01:01:35.389275 | orchestrator | Monday 09 March 2026 00:59:39 +0000 (0:00:02.492) 0:00:51.175 ********** 2026-03-09 01:01:35.389287 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:35.389298 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:35.389309 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:35.389320 | orchestrator | 2026-03-09 01:01:35.389331 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-09 01:01:35.389341 | orchestrator | Monday 09 March 2026 00:59:40 +0000 (0:00:01.164) 0:00:52.339 ********** 2026-03-09 01:01:35.389352 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:35.389362 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:35.389374 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:35.389385 | orchestrator | 2026-03-09 01:01:35.389397 | 
orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-09 01:01:35.389409 | orchestrator | Monday 09 March 2026 00:59:41 +0000 (0:00:00.343) 0:00:52.682 ********** 2026-03-09 01:01:35.389420 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.389431 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:35.389442 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:35.389452 | orchestrator | 2026-03-09 01:01:35.389464 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-09 01:01:35.389476 | orchestrator | Monday 09 March 2026 00:59:41 +0000 (0:00:00.397) 0:00:53.080 ********** 2026-03-09 01:01:35.389487 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:35.389499 | orchestrator | 2026-03-09 01:01:35.389511 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-09 01:01:35.389523 | orchestrator | Monday 09 March 2026 00:59:56 +0000 (0:00:15.228) 0:01:08.309 ********** 2026-03-09 01:01:35.389535 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:35.389547 | orchestrator | 2026-03-09 01:01:35.389560 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-09 01:01:35.389571 | orchestrator | Monday 09 March 2026 01:00:08 +0000 (0:00:11.976) 0:01:20.285 ********** 2026-03-09 01:01:35.389583 | orchestrator | 2026-03-09 01:01:35.389600 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-09 01:01:35.389612 | orchestrator | Monday 09 March 2026 01:00:08 +0000 (0:00:00.080) 0:01:20.365 ********** 2026-03-09 01:01:35.389622 | orchestrator | 2026-03-09 01:01:35.389634 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-09 01:01:35.389653 | orchestrator | Monday 09 March 2026 01:00:09 +0000 (0:00:00.080) 0:01:20.446 
********** 2026-03-09 01:01:35.389665 | orchestrator | 2026-03-09 01:01:35.389676 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-09 01:01:35.389687 | orchestrator | Monday 09 March 2026 01:00:09 +0000 (0:00:00.085) 0:01:20.531 ********** 2026-03-09 01:01:35.389706 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:35.389719 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:35.389757 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:35.389769 | orchestrator | 2026-03-09 01:01:35.389781 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-09 01:01:35.389793 | orchestrator | Monday 09 March 2026 01:00:27 +0000 (0:00:18.782) 0:01:39.314 ********** 2026-03-09 01:01:35.389805 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:35.389817 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:35.389828 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:35.389839 | orchestrator | 2026-03-09 01:01:35.389849 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-09 01:01:35.389861 | orchestrator | Monday 09 March 2026 01:00:33 +0000 (0:00:05.141) 0:01:44.456 ********** 2026-03-09 01:01:35.389873 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:01:35.389885 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:01:35.389898 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:35.389909 | orchestrator | 2026-03-09 01:01:35.389921 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-09 01:01:35.389933 | orchestrator | Monday 09 March 2026 01:00:41 +0000 (0:00:07.999) 0:01:52.455 ********** 2026-03-09 01:01:35.389946 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:01:35.389957 | orchestrator | 2026-03-09 
01:01:35.389968 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-09 01:01:35.389980 | orchestrator | Monday 09 March 2026 01:00:42 +0000 (0:00:00.920) 0:01:53.375 ********** 2026-03-09 01:01:35.389991 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:01:35.390004 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:35.390015 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:01:35.390073 | orchestrator | 2026-03-09 01:01:35.390085 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-09 01:01:35.390097 | orchestrator | Monday 09 March 2026 01:00:42 +0000 (0:00:00.817) 0:01:54.193 ********** 2026-03-09 01:01:35.390109 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:01:35.390121 | orchestrator | 2026-03-09 01:01:35.390132 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-09 01:01:35.390143 | orchestrator | Monday 09 March 2026 01:00:44 +0000 (0:00:01.939) 0:01:56.133 ********** 2026-03-09 01:01:35.390155 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-09 01:01:35.390166 | orchestrator | 2026-03-09 01:01:35.390178 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-09 01:01:35.390190 | orchestrator | Monday 09 March 2026 01:00:57 +0000 (0:00:12.547) 0:02:08.680 ********** 2026-03-09 01:01:35.390202 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-09 01:01:35.390215 | orchestrator | 2026-03-09 01:01:35.390226 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-09 01:01:35.390237 | orchestrator | Monday 09 March 2026 01:01:22 +0000 (0:00:25.126) 0:02:33.807 ********** 2026-03-09 01:01:35.390248 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-09 01:01:35.390260 | 
orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-09 01:01:35.390272 | orchestrator | 2026-03-09 01:01:35.390284 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-09 01:01:35.390296 | orchestrator | Monday 09 March 2026 01:01:29 +0000 (0:00:07.369) 0:02:41.176 ********** 2026-03-09 01:01:35.390308 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.390320 | orchestrator | 2026-03-09 01:01:35.390330 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-09 01:01:35.390341 | orchestrator | Monday 09 March 2026 01:01:29 +0000 (0:00:00.150) 0:02:41.327 ********** 2026-03-09 01:01:35.390351 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.390376 | orchestrator | 2026-03-09 01:01:35.390389 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-09 01:01:35.390400 | orchestrator | Monday 09 March 2026 01:01:30 +0000 (0:00:00.120) 0:02:41.447 ********** 2026-03-09 01:01:35.390412 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.390423 | orchestrator | 2026-03-09 01:01:35.390435 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-09 01:01:35.390447 | orchestrator | Monday 09 March 2026 01:01:30 +0000 (0:00:00.157) 0:02:41.605 ********** 2026-03-09 01:01:35.390459 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.390470 | orchestrator | 2026-03-09 01:01:35.390482 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-09 01:01:35.390494 | orchestrator | Monday 09 March 2026 01:01:30 +0000 (0:00:00.586) 0:02:42.192 ********** 2026-03-09 01:01:35.390505 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:01:35.390518 | orchestrator | 2026-03-09 01:01:35.390530 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2026-03-09 01:01:35.390542 | orchestrator | Monday 09 March 2026 01:01:34 +0000 (0:00:03.356) 0:02:45.548 ********** 2026-03-09 01:01:35.390554 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:01:35.390566 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:01:35.390578 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:01:35.390589 | orchestrator | 2026-03-09 01:01:35.390600 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:01:35.390618 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-09 01:01:35.390640 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:01:35.390651 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:01:35.390663 | orchestrator | 2026-03-09 01:01:35.390675 | orchestrator | 2026-03-09 01:01:35.390687 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:01:35.390699 | orchestrator | Monday 09 March 2026 01:01:34 +0000 (0:00:00.478) 0:02:46.027 ********** 2026-03-09 01:01:35.390709 | orchestrator | =============================================================================== 2026-03-09 01:01:35.390720 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.13s 2026-03-09 01:01:35.390795 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 18.78s 2026-03-09 01:01:35.390808 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.23s 2026-03-09 01:01:35.390819 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.55s 2026-03-09 01:01:35.390831 | orchestrator | keystone : Running Keystone fernet 
bootstrap container ----------------- 11.98s 2026-03-09 01:01:35.390843 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.33s 2026-03-09 01:01:35.390855 | orchestrator | keystone : Restart keystone container ----------------------------------- 8.00s 2026-03-09 01:01:35.390865 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.37s 2026-03-09 01:01:35.390877 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.15s 2026-03-09 01:01:35.390888 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.14s 2026-03-09 01:01:35.390899 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.81s 2026-03-09 01:01:35.390911 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.64s 2026-03-09 01:01:35.390924 | orchestrator | keystone : Creating default user role ----------------------------------- 3.36s 2026-03-09 01:01:35.390935 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.04s 2026-03-09 01:01:35.390947 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.49s 2026-03-09 01:01:35.390968 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.46s 2026-03-09 01:01:35.390979 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.43s 2026-03-09 01:01:35.390990 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.01s 2026-03-09 01:01:35.391001 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.94s 2026-03-09 01:01:35.391012 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.85s 2026-03-09 01:01:35.391024 | orchestrator | 2026-03-09 01:01:35 | INFO  | Wait 1 second(s) 
until the next check 2026-03-09 01:01:38.423257 | orchestrator | 2026-03-09 01:01:38 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:01:38.424224 | orchestrator | 2026-03-09 01:01:38 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:01:38.425815 | orchestrator | 2026-03-09 01:01:38 | INFO  | Task 6b9400cd-1665-4096-9d05-efd0d8f1df2c is in state SUCCESS 2026-03-09 01:01:38.426577 | orchestrator | 2026-03-09 01:01:38 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:01:38.427702 | orchestrator | 2026-03-09 01:01:38 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:01:38.428485 | orchestrator | 2026-03-09 01:01:38 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:01:38.428526 | orchestrator | 2026-03-09 01:01:38 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:02:30.384976 | orchestrator | 2026-03-09 01:02:30 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:02:30.387542 | orchestrator | 2026-03-09 01:02:30 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:02:30.388806 | orchestrator | 2026-03-09 01:02:30 | INFO  |
Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:02:30.390167 | orchestrator | 2026-03-09 01:02:30 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:02:30.391371 | orchestrator | 2026-03-09 01:02:30 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:02:30.391469 | orchestrator | 2026-03-09 01:02:30 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:02:33.441091 | orchestrator | 2026-03-09 01:02:33 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:02:33.441379 | orchestrator | 2026-03-09 01:02:33 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:02:33.442530 | orchestrator | 2026-03-09 01:02:33 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:02:33.444592 | orchestrator | 2026-03-09 01:02:33 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:02:33.445428 | orchestrator | 2026-03-09 01:02:33 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:02:33.445486 | orchestrator | 2026-03-09 01:02:33 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:02:36.485700 | orchestrator | 2026-03-09 01:02:36 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:02:36.487039 | orchestrator | 2026-03-09 01:02:36 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:02:36.489071 | orchestrator | 2026-03-09 01:02:36 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:02:36.489906 | orchestrator | 2026-03-09 01:02:36 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:02:36.490968 | orchestrator | 2026-03-09 01:02:36 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:02:36.491026 | orchestrator | 2026-03-09 01:02:36 | INFO  | Wait 1 
second(s) until the next check 2026-03-09 01:02:39.527749 | orchestrator | 2026-03-09 01:02:39 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:02:39.528358 | orchestrator | 2026-03-09 01:02:39 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:02:39.528774 | orchestrator | 2026-03-09 01:02:39 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:02:39.529646 | orchestrator | 2026-03-09 01:02:39 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:02:39.530548 | orchestrator | 2026-03-09 01:02:39 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:02:39.530611 | orchestrator | 2026-03-09 01:02:39 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:02:42.564760 | orchestrator | 2026-03-09 01:02:42 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:02:42.565717 | orchestrator | 2026-03-09 01:02:42 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:02:42.566422 | orchestrator | 2026-03-09 01:02:42 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:02:42.567420 | orchestrator | 2026-03-09 01:02:42 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:02:42.568399 | orchestrator | 2026-03-09 01:02:42 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:02:42.568433 | orchestrator | 2026-03-09 01:02:42 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:02:45.647649 | orchestrator | 2026-03-09 01:02:45 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:02:45.648433 | orchestrator | 2026-03-09 01:02:45 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:02:45.649348 | orchestrator | 2026-03-09 01:02:45 | INFO  | Task 
6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:02:45.650444 | orchestrator | 2026-03-09 01:02:45 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:02:45.651646 | orchestrator | 2026-03-09 01:02:45 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:02:45.651927 | orchestrator | 2026-03-09 01:02:45 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:02:48.691850 | orchestrator | 2026-03-09 01:02:48 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:02:48.692433 | orchestrator | 2026-03-09 01:02:48 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:02:48.693242 | orchestrator | 2026-03-09 01:02:48 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:02:48.696418 | orchestrator | 2026-03-09 01:02:48 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:02:48.697359 | orchestrator | 2026-03-09 01:02:48 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:02:48.697408 | orchestrator | 2026-03-09 01:02:48 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:02:51.734449 | orchestrator | 2026-03-09 01:02:51 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:02:51.736260 | orchestrator | 2026-03-09 01:02:51 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:02:51.738694 | orchestrator | 2026-03-09 01:02:51 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:02:51.741199 | orchestrator | 2026-03-09 01:02:51 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:02:51.742248 | orchestrator | 2026-03-09 01:02:51 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:02:51.742547 | orchestrator | 2026-03-09 01:02:51 | INFO  | Wait 1 
second(s) until the next check 2026-03-09 01:02:54.781293 | orchestrator | 2026-03-09 01:02:54 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:02:54.781360 | orchestrator | 2026-03-09 01:02:54 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:02:54.784034 | orchestrator | 2026-03-09 01:02:54 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:02:54.786447 | orchestrator | 2026-03-09 01:02:54 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:02:54.787349 | orchestrator | 2026-03-09 01:02:54 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:02:54.787371 | orchestrator | 2026-03-09 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:02:57.836151 | orchestrator | 2026-03-09 01:02:57 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:02:57.837370 | orchestrator | 2026-03-09 01:02:57 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:02:57.839758 | orchestrator | 2026-03-09 01:02:57 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:02:57.840964 | orchestrator | 2026-03-09 01:02:57 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:02:57.842133 | orchestrator | 2026-03-09 01:02:57 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:02:57.842166 | orchestrator | 2026-03-09 01:02:57 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:00.874323 | orchestrator | 2026-03-09 01:03:00 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:03:00.874948 | orchestrator | 2026-03-09 01:03:00 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:03:00.876195 | orchestrator | 2026-03-09 01:03:00 | INFO  | Task 
6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:03:00.876804 | orchestrator | 2026-03-09 01:03:00 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:03:00.879330 | orchestrator | 2026-03-09 01:03:00 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:03:00.879358 | orchestrator | 2026-03-09 01:03:00 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:03.917302 | orchestrator | 2026-03-09 01:03:03 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:03:03.918172 | orchestrator | 2026-03-09 01:03:03 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:03:03.920769 | orchestrator | 2026-03-09 01:03:03 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:03:03.922011 | orchestrator | 2026-03-09 01:03:03 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:03:03.924428 | orchestrator | 2026-03-09 01:03:03 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:03:03.924630 | orchestrator | 2026-03-09 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:06.962244 | orchestrator | 2026-03-09 01:03:06 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:03:06.964257 | orchestrator | 2026-03-09 01:03:06 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:03:06.965303 | orchestrator | 2026-03-09 01:03:06 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:03:06.966422 | orchestrator | 2026-03-09 01:03:06 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:03:06.967301 | orchestrator | 2026-03-09 01:03:06 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:03:06.967323 | orchestrator | 2026-03-09 01:03:06 | INFO  | Wait 1 
second(s) until the next check 2026-03-09 01:03:10.015150 | orchestrator | 2026-03-09 01:03:10 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:03:10.016412 | orchestrator | 2026-03-09 01:03:10 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state STARTED 2026-03-09 01:03:10.017157 | orchestrator | 2026-03-09 01:03:10 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:03:10.018604 | orchestrator | 2026-03-09 01:03:10 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:03:10.020362 | orchestrator | 2026-03-09 01:03:10 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:03:10.020405 | orchestrator | 2026-03-09 01:03:10 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:13.065129 | orchestrator | 2026-03-09 01:03:13 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:03:13.065759 | orchestrator | 2026-03-09 01:03:13 | INFO  | Task 6cdf273f-3e20-49a5-8ca7-d269642456f8 is in state SUCCESS 2026-03-09 01:03:13.066234 | orchestrator | 2026-03-09 01:03:13.066274 | orchestrator | 2026-03-09 01:03:13.066284 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-09 01:03:13.066294 | orchestrator | 2026-03-09 01:03:13.066303 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-09 01:03:13.066312 | orchestrator | Monday 09 March 2026 01:00:39 +0000 (0:00:00.265) 0:00:00.265 ********** 2026-03-09 01:03:13.066337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-09 01:03:13.066372 | orchestrator | 2026-03-09 01:03:13.066381 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-09 01:03:13.066392 | orchestrator | Monday 09 March 
2026 01:00:39 +0000 (0:00:00.253) 0:00:00.519 ********** 2026-03-09 01:03:13.066399 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-09 01:03:13.066405 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-09 01:03:13.066411 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-09 01:03:13.066417 | orchestrator | 2026-03-09 01:03:13.066423 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-09 01:03:13.066428 | orchestrator | Monday 09 March 2026 01:00:40 +0000 (0:00:01.540) 0:00:02.059 ********** 2026-03-09 01:03:13.066434 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-09 01:03:13.066440 | orchestrator | 2026-03-09 01:03:13.066445 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-09 01:03:13.066451 | orchestrator | Monday 09 March 2026 01:00:42 +0000 (0:00:01.715) 0:00:03.775 ********** 2026-03-09 01:03:13.066456 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.066463 | orchestrator | 2026-03-09 01:03:13.066468 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-09 01:03:13.066474 | orchestrator | Monday 09 March 2026 01:00:43 +0000 (0:00:00.957) 0:00:04.733 ********** 2026-03-09 01:03:13.066479 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.066486 | orchestrator | 2026-03-09 01:03:13.066492 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-09 01:03:13.066497 | orchestrator | Monday 09 March 2026 01:00:44 +0000 (0:00:01.023) 0:00:05.756 ********** 2026-03-09 01:03:13.066503 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-03-09 01:03:13.066508 | orchestrator | ok: [testbed-manager] 2026-03-09 01:03:13.066514 | orchestrator | 2026-03-09 01:03:13.066520 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-09 01:03:13.066525 | orchestrator | Monday 09 March 2026 01:01:25 +0000 (0:00:41.058) 0:00:46.815 ********** 2026-03-09 01:03:13.066531 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-09 01:03:13.066537 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-09 01:03:13.066543 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-09 01:03:13.066548 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-09 01:03:13.066554 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-09 01:03:13.066559 | orchestrator | 2026-03-09 01:03:13.066565 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-09 01:03:13.066570 | orchestrator | Monday 09 March 2026 01:01:30 +0000 (0:00:04.490) 0:00:51.306 ********** 2026-03-09 01:03:13.066576 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-09 01:03:13.066581 | orchestrator | 2026-03-09 01:03:13.066587 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-09 01:03:13.066592 | orchestrator | Monday 09 March 2026 01:01:30 +0000 (0:00:00.503) 0:00:51.810 ********** 2026-03-09 01:03:13.066598 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:03:13.066603 | orchestrator | 2026-03-09 01:03:13.066609 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-09 01:03:13.066615 | orchestrator | Monday 09 March 2026 01:01:30 +0000 (0:00:00.132) 0:00:51.942 ********** 2026-03-09 01:03:13.066620 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:03:13.066625 | orchestrator | 2026-03-09 01:03:13.066631 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-03-09 01:03:13.066637 | orchestrator | Monday 09 March 2026 01:01:31 +0000 (0:00:00.621) 0:00:52.563 ********** 2026-03-09 01:03:13.066642 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.066648 | orchestrator | 2026-03-09 01:03:13.066653 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-09 01:03:13.066666 | orchestrator | Monday 09 March 2026 01:01:32 +0000 (0:00:01.578) 0:00:54.141 ********** 2026-03-09 01:03:13.066671 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.066677 | orchestrator | 2026-03-09 01:03:13.066682 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-09 01:03:13.066688 | orchestrator | Monday 09 March 2026 01:01:33 +0000 (0:00:00.862) 0:00:55.004 ********** 2026-03-09 01:03:13.066693 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.066699 | orchestrator | 2026-03-09 01:03:13.066704 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-09 01:03:13.066710 | orchestrator | Monday 09 March 2026 01:01:34 +0000 (0:00:00.711) 0:00:55.715 ********** 2026-03-09 01:03:13.066715 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-09 01:03:13.066721 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-09 01:03:13.066726 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-09 01:03:13.066732 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-09 01:03:13.066737 | orchestrator | 2026-03-09 01:03:13.066743 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:03:13.066749 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-09 01:03:13.066756 | orchestrator | 2026-03-09 01:03:13.066761 | orchestrator | 2026-03-09 
01:03:13.066777 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:03:13.066783 | orchestrator | Monday 09 March 2026 01:01:36 +0000 (0:00:02.020) 0:00:57.735 ********** 2026-03-09 01:03:13.066788 | orchestrator | =============================================================================== 2026-03-09 01:03:13.066794 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.06s 2026-03-09 01:03:13.066804 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.49s 2026-03-09 01:03:13.066809 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 2.02s 2026-03-09 01:03:13.066881 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.72s 2026-03-09 01:03:13.066889 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.58s 2026-03-09 01:03:13.066896 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.54s 2026-03-09 01:03:13.066902 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.02s 2026-03-09 01:03:13.066927 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.96s 2026-03-09 01:03:13.066934 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.86s 2026-03-09 01:03:13.066940 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.71s 2026-03-09 01:03:13.066947 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.62s 2026-03-09 01:03:13.066954 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s 2026-03-09 01:03:13.066960 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2026-03-09 01:03:13.066967 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-03-09 01:03:13.066974 | orchestrator | 2026-03-09 01:03:13.066981 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-09 01:03:13.066987 | orchestrator | 2.16.14 2026-03-09 01:03:13.066994 | orchestrator | 2026-03-09 01:03:13.067001 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-09 01:03:13.067007 | orchestrator | 2026-03-09 01:03:13.067014 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-09 01:03:13.067021 | orchestrator | Monday 09 March 2026 01:01:42 +0000 (0:00:00.306) 0:00:00.306 ********** 2026-03-09 01:03:13.067027 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.067036 | orchestrator | 2026-03-09 01:03:13.067045 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-09 01:03:13.067061 | orchestrator | Monday 09 March 2026 01:01:44 +0000 (0:00:02.279) 0:00:02.586 ********** 2026-03-09 01:03:13.067070 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.067080 | orchestrator | 2026-03-09 01:03:13.067090 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-09 01:03:13.067099 | orchestrator | Monday 09 March 2026 01:01:45 +0000 (0:00:01.137) 0:00:03.723 ********** 2026-03-09 01:03:13.067108 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.067117 | orchestrator | 2026-03-09 01:03:13.067127 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-09 01:03:13.067136 | orchestrator | Monday 09 March 2026 01:01:46 +0000 (0:00:01.125) 0:00:04.849 ********** 2026-03-09 01:03:13.067146 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.067155 | orchestrator | 2026-03-09 01:03:13.067164 | orchestrator | TASK 
[Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-09 01:03:13.067173 | orchestrator | Monday 09 March 2026 01:01:47 +0000 (0:00:01.256) 0:00:06.105 ********** 2026-03-09 01:03:13.067180 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.067187 | orchestrator | 2026-03-09 01:03:13.067193 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-09 01:03:13.067198 | orchestrator | Monday 09 March 2026 01:01:49 +0000 (0:00:01.421) 0:00:07.527 ********** 2026-03-09 01:03:13.067203 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.067209 | orchestrator | 2026-03-09 01:03:13.067214 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-09 01:03:13.067220 | orchestrator | Monday 09 March 2026 01:01:50 +0000 (0:00:01.232) 0:00:08.759 ********** 2026-03-09 01:03:13.067225 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.067231 | orchestrator | 2026-03-09 01:03:13.067236 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-09 01:03:13.067242 | orchestrator | Monday 09 March 2026 01:01:52 +0000 (0:00:02.069) 0:00:10.828 ********** 2026-03-09 01:03:13.067247 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.067253 | orchestrator | 2026-03-09 01:03:13.067258 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-09 01:03:13.067264 | orchestrator | Monday 09 March 2026 01:01:54 +0000 (0:00:01.400) 0:00:12.229 ********** 2026-03-09 01:03:13.067269 | orchestrator | changed: [testbed-manager] 2026-03-09 01:03:13.067274 | orchestrator | 2026-03-09 01:03:13.067280 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-09 01:03:13.067285 | orchestrator | Monday 09 March 2026 01:02:46 +0000 (0:00:51.992) 0:01:04.221 ********** 2026-03-09 
01:03:13.067291 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:03:13.067296 | orchestrator | 2026-03-09 01:03:13.067302 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-09 01:03:13.067307 | orchestrator | 2026-03-09 01:03:13.067313 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-09 01:03:13.067318 | orchestrator | Monday 09 March 2026 01:02:46 +0000 (0:00:00.213) 0:01:04.435 ********** 2026-03-09 01:03:13.067323 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:13.067329 | orchestrator | 2026-03-09 01:03:13.067334 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-09 01:03:13.067340 | orchestrator | 2026-03-09 01:03:13.067345 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-09 01:03:13.067351 | orchestrator | Monday 09 March 2026 01:02:58 +0000 (0:00:11.884) 0:01:16.319 ********** 2026-03-09 01:03:13.067356 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:03:13.067362 | orchestrator | 2026-03-09 01:03:13.067375 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-09 01:03:13.067380 | orchestrator | 2026-03-09 01:03:13.067386 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-09 01:03:13.067392 | orchestrator | Monday 09 March 2026 01:03:09 +0000 (0:00:11.415) 0:01:27.735 ********** 2026-03-09 01:03:13.067403 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:03:13.067408 | orchestrator | 2026-03-09 01:03:13.067418 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:03:13.067424 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-09 01:03:13.067431 | orchestrator | testbed-node-0 : 
ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:03:13.067436 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:03:13.067442 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:03:13.067448 | orchestrator | 2026-03-09 01:03:13.067453 | orchestrator | 2026-03-09 01:03:13.067459 | orchestrator | 2026-03-09 01:03:13.067464 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:03:13.067470 | orchestrator | Monday 09 March 2026 01:03:10 +0000 (0:00:01.135) 0:01:28.871 ********** 2026-03-09 01:03:13.067475 | orchestrator | =============================================================================== 2026-03-09 01:03:13.067481 | orchestrator | Create admin user ------------------------------------------------------ 51.99s 2026-03-09 01:03:13.067486 | orchestrator | Restart ceph manager service ------------------------------------------- 24.44s 2026-03-09 01:03:13.067492 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.28s 2026-03-09 01:03:13.067497 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.07s 2026-03-09 01:03:13.067503 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.42s 2026-03-09 01:03:13.067508 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.40s 2026-03-09 01:03:13.067514 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.26s 2026-03-09 01:03:13.067519 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.23s 2026-03-09 01:03:13.067525 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.14s 2026-03-09 01:03:13.067530 | orchestrator | Set 
mgr/dashboard/server_port to 7000 ----------------------------------- 1.13s 2026-03-09 01:03:13.067536 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.21s 2026-03-09 01:03:13.067602 | orchestrator | 2026-03-09 01:03:13 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:03:13.067991 | orchestrator | 2026-03-09 01:03:13 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:03:13.069796 | orchestrator | 2026-03-09 01:03:13 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:03:13.069841 | orchestrator | 2026-03-09 01:03:13 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:16.104973 | orchestrator | 2026-03-09 01:03:16 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:03:16.105986 | orchestrator | 2026-03-09 01:03:16 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:03:16.107985 | orchestrator | 2026-03-09 01:03:16 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:03:16.109279 | orchestrator | 2026-03-09 01:03:16 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:03:16.109309 | orchestrator | 2026-03-09 01:03:16 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:19.150815 | orchestrator | 2026-03-09 01:03:19 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:03:19.153043 | orchestrator | 2026-03-09 01:03:19 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:03:19.154010 | orchestrator | 2026-03-09 01:03:19 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:03:19.155101 | orchestrator | 2026-03-09 01:03:19 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:03:19.155136 | orchestrator | 2026-03-09 01:03:19 | INFO  | Wait 
1 second(s) until the next check 2026-03-09 01:03:22.188182 | orchestrator | 2026-03-09 01:03:22 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:03:22.188300 | orchestrator | 2026-03-09 01:03:22 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:03:22.189180 | orchestrator | 2026-03-09 01:03:22 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:03:22.189452 | orchestrator | 2026-03-09 01:03:22 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state STARTED 2026-03-09 01:03:22.189565 | orchestrator | 2026-03-09 01:03:22 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:03:58.793469 | orchestrator | 2026-03-09 01:03:58 | INFO  |
Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:03:58.795554 | orchestrator | 2026-03-09 01:03:58 | INFO  | Task 72ce4cd7-7ee4-4ca3-92c8-37c5a522d6a1 is in state STARTED 2026-03-09 01:03:58.796889 | orchestrator | 2026-03-09 01:03:58 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:03:58.797516 | orchestrator | 2026-03-09 01:03:58 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:03:58.799301 | orchestrator | 2026-03-09 01:03:58 | INFO  | Task 41ed5419-6452-489b-ac6e-6109ac122132 is in state SUCCESS 2026-03-09 01:03:58.800736 | orchestrator | 2026-03-09 01:03:58.800782 | orchestrator | 2026-03-09 01:03:58.800789 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:03:58.800795 | orchestrator | 2026-03-09 01:03:58.800799 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:03:58.800803 | orchestrator | Monday 09 March 2026 01:01:41 +0000 (0:00:00.487) 0:00:00.487 ********** 2026-03-09 01:03:58.800808 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:03:58.800813 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:03:58.800817 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:03:58.800821 | orchestrator | 2026-03-09 01:03:58.800825 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:03:58.800829 | orchestrator | Monday 09 March 2026 01:01:41 +0000 (0:00:00.527) 0:00:01.014 ********** 2026-03-09 01:03:58.800833 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-09 01:03:58.800838 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-09 01:03:58.800842 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-09 01:03:58.800846 | orchestrator | 2026-03-09 01:03:58.800850 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-03-09 01:03:58.800854 | orchestrator | 2026-03-09 01:03:58.800858 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-09 01:03:58.800862 | orchestrator | Monday 09 March 2026 01:01:42 +0000 (0:00:00.979) 0:00:01.994 ********** 2026-03-09 01:03:58.800866 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:03:58.800871 | orchestrator | 2026-03-09 01:03:58.800875 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-09 01:03:58.800879 | orchestrator | Monday 09 March 2026 01:01:43 +0000 (0:00:00.741) 0:00:02.735 ********** 2026-03-09 01:03:58.800883 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-09 01:03:58.800887 | orchestrator | 2026-03-09 01:03:58.800891 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-09 01:03:58.800907 | orchestrator | Monday 09 March 2026 01:01:47 +0000 (0:00:03.740) 0:00:06.476 ********** 2026-03-09 01:03:58.800911 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-09 01:03:58.800929 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-09 01:03:58.800933 | orchestrator | 2026-03-09 01:03:58.800937 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-09 01:03:58.800941 | orchestrator | Monday 09 March 2026 01:01:54 +0000 (0:00:07.292) 0:00:13.769 ********** 2026-03-09 01:03:58.800945 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-09 01:03:58.800949 | orchestrator | 2026-03-09 01:03:58.800953 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-09 
01:03:58.800957 | orchestrator | Monday 09 March 2026 01:01:58 +0000 (0:00:03.656) 0:00:17.425 ********** 2026-03-09 01:03:58.800961 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-09 01:03:58.800965 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:03:58.800969 | orchestrator | 2026-03-09 01:03:58.800973 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-09 01:03:58.800977 | orchestrator | Monday 09 March 2026 01:02:02 +0000 (0:00:04.220) 0:00:21.645 ********** 2026-03-09 01:03:58.800981 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:03:58.800984 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-09 01:03:58.800988 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-09 01:03:58.801045 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-09 01:03:58.801051 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-09 01:03:58.801055 | orchestrator | 2026-03-09 01:03:58.801059 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-09 01:03:58.801063 | orchestrator | Monday 09 March 2026 01:02:18 +0000 (0:00:16.223) 0:00:37.869 ********** 2026-03-09 01:03:58.801066 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-09 01:03:58.801070 | orchestrator | 2026-03-09 01:03:58.801074 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-09 01:03:58.801078 | orchestrator | Monday 09 March 2026 01:02:23 +0000 (0:00:04.811) 0:00:42.681 ********** 2026-03-09 01:03:58.801083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}}) 2026-03-09 01:03:58.801128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801150 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801154 | orchestrator | 2026-03-09 01:03:58.801160 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-09 01:03:58.801164 | orchestrator | Monday 09 March 2026 01:02:26 +0000 (0:00:02.779) 0:00:45.461 ********** 2026-03-09 01:03:58.801168 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-09 01:03:58.801172 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-09 01:03:58.801176 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-09 01:03:58.801179 | orchestrator | 2026-03-09 01:03:58.801183 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-09 01:03:58.801187 | orchestrator | Monday 09 March 2026 01:02:27 +0000 (0:00:01.264) 0:00:46.725 ********** 2026-03-09 01:03:58.801191 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:58.801195 | orchestrator | 2026-03-09 01:03:58.801199 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-09 01:03:58.801203 | orchestrator | Monday 09 March 2026 01:02:27 +0000 (0:00:00.120) 0:00:46.846 ********** 2026-03-09 01:03:58.801207 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:58.801211 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:58.801215 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
01:03:58.801219 | orchestrator | 2026-03-09 01:03:58.801223 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-09 01:03:58.801227 | orchestrator | Monday 09 March 2026 01:02:28 +0000 (0:00:01.052) 0:00:47.898 ********** 2026-03-09 01:03:58.801231 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:03:58.801235 | orchestrator | 2026-03-09 01:03:58.801291 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-09 01:03:58.801296 | orchestrator | Monday 09 March 2026 01:02:29 +0000 (0:00:01.190) 0:00:49.089 ********** 2026-03-09 01:03:58.801300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801359 | orchestrator | 2026-03-09 01:03:58.801363 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-09 01:03:58.801367 | orchestrator | Monday 09 March 2026 01:02:34 +0000 (0:00:04.794) 0:00:53.883 ********** 2026-03-09 01:03:58.801374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:03:58.801378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801387 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:58.801394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:03:58.801402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801412 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:58.801417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:03:58.801421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801432 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:58.801436 | orchestrator | 2026-03-09 01:03:58.801440 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-09 01:03:58.801444 | orchestrator | Monday 09 March 2026 01:02:36 +0000 (0:00:01.632) 0:00:55.516 ********** 2026-03-09 01:03:58.801452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:03:58.801457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801467 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:58.801471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:03:58.801475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801489 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:58.801496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:03:58.801501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801509 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:58.801513 | orchestrator | 2026-03-09 01:03:58.801517 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-09 01:03:58.801521 | orchestrator | Monday 09 March 2026 01:02:37 +0000 (0:00:01.037) 0:00:56.553 ********** 2026-03-09 01:03:58.801525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2026-03-09 01:03:58.801742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 
2026-03-09 01:03:58.801763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801771 | orchestrator | 2026-03-09 01:03:58.801775 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-09 01:03:58.801779 | orchestrator | Monday 09 March 2026 01:02:41 +0000 (0:00:04.521) 0:01:01.075 ********** 2026-03-09 01:03:58.801783 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:58.801787 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:03:58.801791 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:03:58.801794 | orchestrator | 2026-03-09 01:03:58.801798 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-09 01:03:58.801802 | orchestrator | Monday 09 March 2026 01:02:46 +0000 (0:00:04.320) 
0:01:05.395 ********** 2026-03-09 01:03:58.801806 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:03:58.801810 | orchestrator | 2026-03-09 01:03:58.801814 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-09 01:03:58.801818 | orchestrator | Monday 09 March 2026 01:02:48 +0000 (0:00:02.699) 0:01:08.095 ********** 2026-03-09 01:03:58.801821 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:58.801825 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:58.801831 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:58.801835 | orchestrator | 2026-03-09 01:03:58.801839 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-09 01:03:58.801843 | orchestrator | Monday 09 March 2026 01:02:49 +0000 (0:00:00.753) 0:01:08.849 ********** 2026-03-09 01:03:58.801847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.801896 | orchestrator | 2026-03-09 01:03:58.801899 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-09 01:03:58.801906 | orchestrator | Monday 09 March 2026 01:03:01 +0000 (0:00:11.917) 0:01:20.766 ********** 2026-03-09 01:03:58.801910 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:03:58.801916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801927 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:58.801931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:03:58.801935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801943 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801947 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:58.801951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-09 01:03:58.801958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:03:58.801969 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:58.801973 | orchestrator | 2026-03-09 01:03:58.801977 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-09 01:03:58.801980 | orchestrator | Monday 09 March 2026 01:03:03 +0000 (0:00:02.254) 0:01:23.020 ********** 2026-03-09 01:03:58.801984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.801992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.802055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-09 01:03:58.802066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.802071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.802075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.802079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.802088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.802092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:03:58.802100 | orchestrator | 2026-03-09 01:03:58.802104 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-09 01:03:58.802108 | orchestrator | Monday 09 March 2026 01:03:08 +0000 (0:00:04.808) 0:01:27.829 ********** 2026-03-09 01:03:58.802112 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:03:58.802116 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:03:58.802120 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:03:58.802123 | orchestrator | 2026-03-09 01:03:58.802127 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-09 01:03:58.802135 | orchestrator | Monday 09 March 2026 01:03:09 +0000 (0:00:01.109) 0:01:28.938 ********** 2026-03-09 01:03:58.802139 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:58.802143 | orchestrator | 2026-03-09 01:03:58.802147 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-09 01:03:58.802151 | orchestrator | Monday 09 March 2026 01:03:12 +0000 (0:00:02.476) 0:01:31.414 ********** 2026-03-09 01:03:58.802154 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:58.802158 | orchestrator | 2026-03-09 01:03:58.802162 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-09 01:03:58.802166 | orchestrator | Monday 09 March 2026 01:03:15 +0000 (0:00:02.833) 0:01:34.248 ********** 2026-03-09 01:03:58.802170 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:58.802173 | orchestrator | 2026-03-09 01:03:58.802177 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-09 01:03:58.802181 | orchestrator | Monday 09 March 2026 01:03:29 +0000 (0:00:14.059) 0:01:48.308 
********** 2026-03-09 01:03:58.802185 | orchestrator | 2026-03-09 01:03:58.802189 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-09 01:03:58.802193 | orchestrator | Monday 09 March 2026 01:03:29 +0000 (0:00:00.074) 0:01:48.382 ********** 2026-03-09 01:03:58.802197 | orchestrator | 2026-03-09 01:03:58.802200 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-09 01:03:58.802204 | orchestrator | Monday 09 March 2026 01:03:29 +0000 (0:00:00.080) 0:01:48.463 ********** 2026-03-09 01:03:58.802208 | orchestrator | 2026-03-09 01:03:58.802212 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-09 01:03:58.802216 | orchestrator | Monday 09 March 2026 01:03:29 +0000 (0:00:00.076) 0:01:48.539 ********** 2026-03-09 01:03:58.802220 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:58.802223 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:03:58.802227 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:03:58.802231 | orchestrator | 2026-03-09 01:03:58.802235 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-09 01:03:58.802239 | orchestrator | Monday 09 March 2026 01:03:37 +0000 (0:00:08.562) 0:01:57.102 ********** 2026-03-09 01:03:58.802242 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:03:58.802246 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:03:58.802250 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:58.802254 | orchestrator | 2026-03-09 01:03:58.802258 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-09 01:03:58.802262 | orchestrator | Monday 09 March 2026 01:03:47 +0000 (0:00:09.515) 0:02:06.617 ********** 2026-03-09 01:03:58.802265 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:03:58.802269 | orchestrator | changed: 
[testbed-node-1] 2026-03-09 01:03:58.802273 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:03:58.802277 | orchestrator | 2026-03-09 01:03:58.802280 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:03:58.802285 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:03:58.802290 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:03:58.802294 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:03:58.802301 | orchestrator | 2026-03-09 01:03:58.802305 | orchestrator | 2026-03-09 01:03:58.802308 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:03:58.802312 | orchestrator | Monday 09 March 2026 01:03:54 +0000 (0:00:07.478) 0:02:14.096 ********** 2026-03-09 01:03:58.802316 | orchestrator | =============================================================================== 2026-03-09 01:03:58.802320 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.22s 2026-03-09 01:03:58.802326 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 14.06s 2026-03-09 01:03:58.802330 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.92s 2026-03-09 01:03:58.802334 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.51s 2026-03-09 01:03:58.802338 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.56s 2026-03-09 01:03:58.802341 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.48s 2026-03-09 01:03:58.802345 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.29s 2026-03-09 
01:03:58.802349 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.81s 2026-03-09 01:03:58.802353 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.81s 2026-03-09 01:03:58.802356 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.79s 2026-03-09 01:03:58.802360 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.52s 2026-03-09 01:03:58.802364 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 4.32s 2026-03-09 01:03:58.802367 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.22s 2026-03-09 01:03:58.802371 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.74s 2026-03-09 01:03:58.802375 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.66s 2026-03-09 01:03:58.802379 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.83s 2026-03-09 01:03:58.802383 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.78s 2026-03-09 01:03:58.802386 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.70s 2026-03-09 01:03:58.802393 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.48s 2026-03-09 01:03:58.802396 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.25s 2026-03-09 01:03:58.802400 | orchestrator | 2026-03-09 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:04:01.909412 | orchestrator | 2026-03-09 01:04:01 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:04:01.909491 | orchestrator | 2026-03-09 01:04:01 | INFO  | Task 72ce4cd7-7ee4-4ca3-92c8-37c5a522d6a1 is in state STARTED 2026-03-09
01:04:01.909499 | orchestrator | 2026-03-09 01:04:01 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state STARTED 2026-03-09 01:04:01.909506 | orchestrator | 2026-03-09 01:04:01 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:04:01.909513 | orchestrator | 2026-03-09 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:05:08.966811 | orchestrator | 2026-03-09 01:05:08 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED 2026-03-09 01:05:08.968640 | orchestrator | 2026-03-09 01:05:08 | INFO  | Task 72ce4cd7-7ee4-4ca3-92c8-37c5a522d6a1 is in state STARTED 2026-03-09 01:05:08.973769 | orchestrator | 2026-03-09 01:05:08 | INFO  | Task 6b8fbadf-b583-4bae-bec2-0ca1ce4d812e is in state SUCCESS 2026-03-09 01:05:08.976162 | orchestrator | 2026-03-09 01:05:08.976222 | orchestrator | 2026-03-09 01:05:08.976231 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:05:08.976238 | orchestrator | 2026-03-09 01:05:08.976243 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:05:08.976248 | orchestrator | Monday 09 March 2026 01:01:42 +0000 (0:00:00.362) 0:00:00.362 ********** 2026-03-09 01:05:08.976253 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:05:08.976259 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:05:08.976264 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:05:08.976269 | orchestrator | 2026-03-09 01:05:08.976274 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:05:08.976285 | orchestrator | Monday 09 March 2026 01:01:42 +0000 (0:00:00.464) 0:00:00.827 ********** 2026-03-09 01:05:08.976291 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-09 01:05:08.976296 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-09 01:05:08.976301 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-09
01:05:08.976305 | orchestrator | 2026-03-09 01:05:08.976315 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-09 01:05:08.976320 | orchestrator | 2026-03-09 01:05:08.976324 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-09 01:05:08.976346 | orchestrator | Monday 09 March 2026 01:01:43 +0000 (0:00:00.661) 0:00:01.489 ********** 2026-03-09 01:05:08.976351 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:05:08.976357 | orchestrator | 2026-03-09 01:05:08.976362 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-09 01:05:08.976366 | orchestrator | Monday 09 March 2026 01:01:44 +0000 (0:00:00.737) 0:00:02.226 ********** 2026-03-09 01:05:08.976371 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-09 01:05:08.976375 | orchestrator | 2026-03-09 01:05:08.976380 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-09 01:05:08.976384 | orchestrator | Monday 09 March 2026 01:01:48 +0000 (0:00:04.028) 0:00:06.255 ********** 2026-03-09 01:05:08.976389 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-09 01:05:08.976394 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-09 01:05:08.976399 | orchestrator | 2026-03-09 01:05:08.976404 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-09 01:05:08.976409 | orchestrator | Monday 09 March 2026 01:01:55 +0000 (0:00:07.006) 0:00:13.261 ********** 2026-03-09 01:05:08.976414 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:05:08.976419 | orchestrator | 2026-03-09 01:05:08.976424 | orchestrator | 
TASK [service-ks-register : designate | Creating users] ************************ 2026-03-09 01:05:08.976428 | orchestrator | Monday 09 March 2026 01:01:58 +0000 (0:00:03.845) 0:00:17.106 ********** 2026-03-09 01:05:08.976433 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-09 01:05:08.976438 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:05:08.976442 | orchestrator | 2026-03-09 01:05:08.976447 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-09 01:05:08.976451 | orchestrator | Monday 09 March 2026 01:02:03 +0000 (0:00:04.585) 0:00:21.692 ********** 2026-03-09 01:05:08.976456 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:05:08.976461 | orchestrator | 2026-03-09 01:05:08.976465 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-09 01:05:08.976470 | orchestrator | Monday 09 March 2026 01:02:07 +0000 (0:00:03.552) 0:00:25.244 ********** 2026-03-09 01:05:08.976475 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-09 01:05:08.976479 | orchestrator | 2026-03-09 01:05:08.976484 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-09 01:05:08.976489 | orchestrator | Monday 09 March 2026 01:02:10 +0000 (0:00:03.609) 0:00:28.853 ********** 2026-03-09 01:05:08.976506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:05:08.976525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:05:08.976535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:05:08.976541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976603 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976640 | orchestrator | 2026-03-09 01:05:08.976645 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-09 01:05:08.976650 | orchestrator | Monday 09 March 2026 01:02:13 +0000 
(0:00:03.120) 0:00:31.973 ********** 2026-03-09 01:05:08.976654 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:08.976659 | orchestrator | 2026-03-09 01:05:08.976664 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-09 01:05:08.976668 | orchestrator | Monday 09 March 2026 01:02:13 +0000 (0:00:00.154) 0:00:32.128 ********** 2026-03-09 01:05:08.976672 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:08.976677 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:05:08.976682 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:05:08.976686 | orchestrator | 2026-03-09 01:05:08.976691 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-09 01:05:08.976695 | orchestrator | Monday 09 March 2026 01:02:14 +0000 (0:00:00.337) 0:00:32.465 ********** 2026-03-09 01:05:08.976700 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:05:08.976705 | orchestrator | 2026-03-09 01:05:08.976709 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-09 01:05:08.976714 | orchestrator | Monday 09 March 2026 01:02:15 +0000 (0:00:00.798) 0:00:33.264 ********** 2026-03-09 01:05:08.976727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:05:08.976735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:05:08.976741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2026-03-09 01:05:08.976746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 
01:05:08.976784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.976834 | orchestrator | 2026-03-09 01:05:08.976839 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-09 01:05:08.976846 | orchestrator | Monday 09 March 2026 01:02:21 +0000 (0:00:06.642) 0:00:39.907 ********** 2026-03-09 01:05:08.976857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:05:08.976862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:05:08.976869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.976874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.976879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.976883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2026-03-09 01:05:08.976891 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:08.976898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:05:08.976903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:05:08.977066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.977073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.977078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.977083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977092 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:05:08.977141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 01:05:08.977150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:05:08.977161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977183 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:05:08.977188 | orchestrator |
2026-03-09 01:05:08.977193 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-03-09 01:05:08.977198 | orchestrator | Monday 09 March 2026 01:02:22 +0000 (0:00:00.909) 0:00:40.816 **********
2026-03-09 01:05:08.977202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 01:05:08.977210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:05:08.977217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977239 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:05:08.977243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 01:05:08.977251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:05:08.977258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977280 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:05:08.977284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 01:05:08.977292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:05:08.977297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977321 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:05:08.977326 | orchestrator |
2026-03-09 01:05:08.977330 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-03-09 01:05:08.977335 | orchestrator | Monday 09 March 2026 01:02:25 +0000 (0:00:03.092) 0:00:43.908 **********
2026-03-09 01:05:08.977339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 01:05:08.977347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 01:05:08.977356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 01:05:08.977361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:05:08.977368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:05:08.977373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:05:08.977378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.977455 | orchestrator |
2026-03-09 01:05:08.977459 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-09 01:05:08.977464 | orchestrator | Monday 09 March 2026 01:02:33 +0000 (0:00:07.563) 0:00:51.471 **********
2026-03-09 01:05:08.977468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 01:05:08.977473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 01:05:08.977480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-09 01:05:08.978075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:05:08.978110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:05:08.978116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-09 01:05:08.978120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.978124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.978132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.978141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.978150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.978154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.978158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.978162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.978166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.978172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.978180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-09 01:05:08.978188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled':
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978192 | orchestrator | 2026-03-09 01:05:08.978196 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-09 01:05:08.978200 | orchestrator | Monday 09 March 2026 01:03:00 +0000 (0:00:27.030) 0:01:18.501 ********** 2026-03-09 01:05:08.978204 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-09 01:05:08.978208 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-09 01:05:08.978212 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-09 01:05:08.978216 | orchestrator | 2026-03-09 01:05:08.978219 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-09 01:05:08.978223 | orchestrator | Monday 09 March 2026 01:03:08 +0000 (0:00:08.043) 0:01:26.545 ********** 2026-03-09 01:05:08.978227 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-09 01:05:08.978231 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-09 01:05:08.978234 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-09 01:05:08.978238 | orchestrator | 2026-03-09 01:05:08.978242 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-09 01:05:08.978246 | orchestrator | Monday 09 
March 2026 01:03:12 +0000 (0:00:04.331) 0:01:30.877 ********** 2026-03-09 01:05:08.978250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:05:08.978256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:05:08.978267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:05:08.978271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-03-09 01:05:08.978297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978311 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978347 | orchestrator | 2026-03-09 01:05:08.978351 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-09 01:05:08.978355 | orchestrator | Monday 09 March 2026 01:03:17 +0000 (0:00:04.316) 0:01:35.193 ********** 2026-03-09 01:05:08.978359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:05:08.978363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:05:08.978374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:05:08.978381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-03-09 01:05:08.978385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978397 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978581 | orchestrator | 2026-03-09 01:05:08.978585 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-09 01:05:08.978589 | orchestrator | Monday 09 March 2026 01:03:20 +0000 (0:00:03.443) 0:01:38.637 ********** 2026-03-09 01:05:08.978593 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:08.978597 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:05:08.978601 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:05:08.978605 | orchestrator | 2026-03-09 01:05:08.978609 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-09 01:05:08.978612 | orchestrator | Monday 09 March 2026 01:03:21 +0000 (0:00:00.993) 0:01:39.630 ********** 2026-03-09 01:05:08.978616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:05:08.978620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:05:08.978631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-03-09 01:05:08.978635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978651 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:08.978655 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:05:08.978659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:05:08.978666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978689 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:05:08.978692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-09 01:05:08.978699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-09 01:05:08.978703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:05:08.978731 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:05:08.978737 | orchestrator | 2026-03-09 01:05:08.978741 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-09 01:05:08.978745 | orchestrator | Monday 09 March 2026 01:03:22 +0000 (0:00:01.070) 0:01:40.701 ********** 2026-03-09 01:05:08.978749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:05:08.978757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:05:08.978764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-09 01:05:08.978768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978807 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:05:08.978847 | orchestrator | 2026-03-09 01:05:08.978851 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-09 01:05:08.978855 | orchestrator | Monday 09 March 2026 01:03:28 +0000 (0:00:05.855) 0:01:46.557 ********** 2026-03-09 01:05:08.978859 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:08.978863 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:05:08.978867 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:05:08.978870 | orchestrator | 2026-03-09 01:05:08.978874 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-09 01:05:08.978878 | orchestrator | Monday 09 March 2026 01:03:28 +0000 (0:00:00.448) 0:01:47.005 ********** 2026-03-09 01:05:08.978883 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-09 01:05:08.978889 | orchestrator | 2026-03-09 01:05:08.978893 | orchestrator | TASK [designate : Creating Designate databases user and setting 
permissions] *** 2026-03-09 01:05:08.978897 | orchestrator | Monday 09 March 2026 01:03:31 +0000 (0:00:02.440) 0:01:49.446 ********** 2026-03-09 01:05:08.978901 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 01:05:08.978905 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-09 01:05:08.978909 | orchestrator | 2026-03-09 01:05:08.978913 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-09 01:05:08.978916 | orchestrator | Monday 09 March 2026 01:03:33 +0000 (0:00:02.670) 0:01:52.116 ********** 2026-03-09 01:05:08.978921 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:08.978924 | orchestrator | 2026-03-09 01:05:08.978928 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-09 01:05:08.978932 | orchestrator | Monday 09 March 2026 01:03:52 +0000 (0:00:18.236) 0:02:10.353 ********** 2026-03-09 01:05:08.978936 | orchestrator | 2026-03-09 01:05:08.978939 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-09 01:05:08.978943 | orchestrator | Monday 09 March 2026 01:03:52 +0000 (0:00:00.068) 0:02:10.421 ********** 2026-03-09 01:05:08.978947 | orchestrator | 2026-03-09 01:05:08.978951 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-09 01:05:08.978955 | orchestrator | Monday 09 March 2026 01:03:52 +0000 (0:00:00.070) 0:02:10.492 ********** 2026-03-09 01:05:08.978959 | orchestrator | 2026-03-09 01:05:08.978962 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-09 01:05:08.978966 | orchestrator | Monday 09 March 2026 01:03:52 +0000 (0:00:00.066) 0:02:10.559 ********** 2026-03-09 01:05:08.978970 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:08.978974 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:05:08.978977 | 
orchestrator | changed: [testbed-node-2] 2026-03-09 01:05:08.978981 | orchestrator | 2026-03-09 01:05:08.978985 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-09 01:05:08.978989 | orchestrator | Monday 09 March 2026 01:04:07 +0000 (0:00:14.720) 0:02:25.280 ********** 2026-03-09 01:05:08.978993 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:08.978997 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:05:08.979001 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:05:08.979004 | orchestrator | 2026-03-09 01:05:08.979008 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-09 01:05:08.979012 | orchestrator | Monday 09 March 2026 01:04:15 +0000 (0:00:07.900) 0:02:33.180 ********** 2026-03-09 01:05:08.979016 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:08.979020 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:05:08.979024 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:05:08.979028 | orchestrator | 2026-03-09 01:05:08.979032 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-09 01:05:08.979035 | orchestrator | Monday 09 March 2026 01:04:28 +0000 (0:00:13.422) 0:02:46.602 ********** 2026-03-09 01:05:08.979039 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:08.979043 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:05:08.979047 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:05:08.979051 | orchestrator | 2026-03-09 01:05:08.979055 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-09 01:05:08.979058 | orchestrator | Monday 09 March 2026 01:04:40 +0000 (0:00:11.631) 0:02:58.234 ********** 2026-03-09 01:05:08.979062 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:08.979066 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:05:08.979070 | orchestrator | 
changed: [testbed-node-2] 2026-03-09 01:05:08.979073 | orchestrator | 2026-03-09 01:05:08.979077 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-09 01:05:08.979081 | orchestrator | Monday 09 March 2026 01:04:46 +0000 (0:00:06.820) 0:03:05.055 ********** 2026-03-09 01:05:08.979087 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:08.979111 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:05:08.979116 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:05:08.979120 | orchestrator | 2026-03-09 01:05:08.979124 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-09 01:05:08.979128 | orchestrator | Monday 09 March 2026 01:04:59 +0000 (0:00:12.197) 0:03:17.253 ********** 2026-03-09 01:05:08.979132 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:08.979136 | orchestrator | 2026-03-09 01:05:08.979140 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:05:08.979144 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:05:08.979148 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:05:08.979152 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:05:08.979156 | orchestrator | 2026-03-09 01:05:08.979160 | orchestrator | 2026-03-09 01:05:08.979166 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:05:08.979170 | orchestrator | Monday 09 March 2026 01:05:06 +0000 (0:00:07.619) 0:03:24.872 ********** 2026-03-09 01:05:08.979174 | orchestrator | =============================================================================== 2026-03-09 01:05:08.979177 | orchestrator | designate : Copying over designate.conf 
-------------------------------- 27.03s
2026-03-09 01:05:08.979181 | orchestrator | designate : Running Designate bootstrap container ---------------------- 18.24s
2026-03-09 01:05:08.979185 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.72s
2026-03-09 01:05:08.979189 | orchestrator | designate : Restart designate-central container ------------------------ 13.42s
2026-03-09 01:05:08.979193 | orchestrator | designate : Restart designate-worker container ------------------------- 12.20s
2026-03-09 01:05:08.979197 | orchestrator | designate : Restart designate-producer container ----------------------- 11.63s
2026-03-09 01:05:08.979201 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.04s
2026-03-09 01:05:08.979205 | orchestrator | designate : Restart designate-api container ----------------------------- 7.90s
2026-03-09 01:05:08.979209 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.62s
2026-03-09 01:05:08.979212 | orchestrator | designate : Copying over config.json files for services ----------------- 7.56s
2026-03-09 01:05:08.979216 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.01s
2026-03-09 01:05:08.979220 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.82s
2026-03-09 01:05:08.979224 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.64s
2026-03-09 01:05:08.979228 | orchestrator | designate : Check designate containers ---------------------------------- 5.86s
2026-03-09 01:05:08.979232 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.59s
2026-03-09 01:05:08.979235 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.33s
2026-03-09 01:05:08.979239 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.32s
2026-03-09 01:05:08.979243 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.03s
2026-03-09 01:05:08.979247 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.85s
2026-03-09 01:05:08.979251 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.61s
2026-03-09 01:05:08.979271 | orchestrator | 2026-03-09 01:05:08 | INFO  | Task 6b576874-1d1b-4e5a-b57d-b86d5100b8cc is in state STARTED
2026-03-09 01:05:08.979276 | orchestrator | 2026-03-09 01:05:08 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED
2026-03-09 01:05:08.979280 | orchestrator | 2026-03-09 01:05:08 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:05:12.023984 | orchestrator | 2026-03-09 01:05:12 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED
2026-03-09 01:05:12.024713 | orchestrator | 2026-03-09 01:05:12 | INFO  | Task 72ce4cd7-7ee4-4ca3-92c8-37c5a522d6a1 is in state STARTED
2026-03-09 01:05:12.026253 | orchestrator | 2026-03-09 01:05:12 | INFO  | Task 6b576874-1d1b-4e5a-b57d-b86d5100b8cc is in state STARTED
2026-03-09 01:05:12.026636 | orchestrator | 2026-03-09 01:05:12 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED
2026-03-09 01:05:12.026703 | orchestrator | 2026-03-09 01:05:12 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:05:15.066618 | orchestrator | 2026-03-09 01:05:15 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED
2026-03-09 01:05:15.069082 | orchestrator | 2026-03-09 01:05:15 | INFO  | Task 72ce4cd7-7ee4-4ca3-92c8-37c5a522d6a1 is in state STARTED
2026-03-09 01:05:15.071012 | orchestrator | 2026-03-09 01:05:15 | INFO  | Task 6b576874-1d1b-4e5a-b57d-b86d5100b8cc is in state STARTED
2026-03-09 01:05:15.072878 | orchestrator | 2026-03-09 01:05:15 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED
2026-03-09 01:05:15.072935 | orchestrator | 2026-03-09 01:05:15 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:05:18.110785 | orchestrator | 2026-03-09 01:05:18 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED
2026-03-09 01:05:18.112505 | orchestrator | 2026-03-09 01:05:18 | INFO  | Task 72ce4cd7-7ee4-4ca3-92c8-37c5a522d6a1 is in state STARTED
2026-03-09 01:05:18.114221 | orchestrator | 2026-03-09 01:05:18 | INFO  | Task 6b576874-1d1b-4e5a-b57d-b86d5100b8cc is in state STARTED
2026-03-09 01:05:18.116767 | orchestrator | 2026-03-09 01:05:18 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED
2026-03-09 01:05:18.116811 | orchestrator | 2026-03-09 01:05:18 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:05:21.192323 | orchestrator | 2026-03-09 01:05:21 | INFO  | Task d2f75e88-d7fd-479b-9245-bc62ca240e2c is in state STARTED
2026-03-09 01:05:21.195128 | orchestrator | 2026-03-09 01:05:21 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED
2026-03-09 01:05:21.197517 | orchestrator | 2026-03-09 01:05:21 | INFO  | Task 72ce4cd7-7ee4-4ca3-92c8-37c5a522d6a1 is in state SUCCESS
2026-03-09 01:05:21.199104 | orchestrator |
2026-03-09 01:05:21.199213 | orchestrator |
2026-03-09 01:05:21.199228 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:05:21.199242 | orchestrator |
2026-03-09 01:05:21.199253 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:05:21.199265 | orchestrator | Monday 09 March 2026 01:04:02 +0000 (0:00:00.305) 0:00:00.305 **********
2026-03-09 01:05:21.199277 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:05:21.199290 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:05:21.199301 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:05:21.199312 | orchestrator |
2026-03-09 01:05:21.199324 | orchestrator | TASK [Group hosts based on
enabled services] *********************************** 2026-03-09 01:05:21.199335 | orchestrator | Monday 09 March 2026 01:04:03 +0000 (0:00:00.608) 0:00:00.913 ********** 2026-03-09 01:05:21.199347 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-09 01:05:21.199358 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-09 01:05:21.199370 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-09 01:05:21.199381 | orchestrator | 2026-03-09 01:05:21.199392 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-09 01:05:21.199403 | orchestrator | 2026-03-09 01:05:21.199415 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-09 01:05:21.199426 | orchestrator | Monday 09 March 2026 01:04:04 +0000 (0:00:00.978) 0:00:01.892 ********** 2026-03-09 01:05:21.199465 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:05:21.199478 | orchestrator | 2026-03-09 01:05:21.199489 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-09 01:05:21.199501 | orchestrator | Monday 09 March 2026 01:04:05 +0000 (0:00:01.146) 0:00:03.038 ********** 2026-03-09 01:05:21.199512 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-09 01:05:21.199523 | orchestrator | 2026-03-09 01:05:21.199534 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-09 01:05:21.199545 | orchestrator | Monday 09 March 2026 01:04:09 +0000 (0:00:04.064) 0:00:07.102 ********** 2026-03-09 01:05:21.199556 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-09 01:05:21.199567 | orchestrator | changed: [testbed-node-0] => (item=placement -> 
https://api.testbed.osism.xyz:8780 -> public) 2026-03-09 01:05:21.199579 | orchestrator | 2026-03-09 01:05:21.199590 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-09 01:05:21.199601 | orchestrator | Monday 09 March 2026 01:04:17 +0000 (0:00:07.347) 0:00:14.450 ********** 2026-03-09 01:05:21.199613 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:05:21.199624 | orchestrator | 2026-03-09 01:05:21.199635 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-09 01:05:21.199646 | orchestrator | Monday 09 March 2026 01:04:20 +0000 (0:00:03.791) 0:00:18.242 ********** 2026-03-09 01:05:21.199660 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-09 01:05:21.199672 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:05:21.199685 | orchestrator | 2026-03-09 01:05:21.199698 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-09 01:05:21.199711 | orchestrator | Monday 09 March 2026 01:04:25 +0000 (0:00:04.309) 0:00:22.551 ********** 2026-03-09 01:05:21.199724 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:05:21.199737 | orchestrator | 2026-03-09 01:05:21.199751 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-09 01:05:21.199763 | orchestrator | Monday 09 March 2026 01:04:29 +0000 (0:00:03.947) 0:00:26.499 ********** 2026-03-09 01:05:21.199776 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-09 01:05:21.199789 | orchestrator | 2026-03-09 01:05:21.199801 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-09 01:05:21.199814 | orchestrator | Monday 09 March 2026 01:04:33 +0000 (0:00:03.997) 0:00:30.496 ********** 2026-03-09 01:05:21.199827 | orchestrator | 
skipping: [testbed-node-0] 2026-03-09 01:05:21.199841 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:05:21.199854 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:05:21.199867 | orchestrator | 2026-03-09 01:05:21.199886 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-09 01:05:21.199923 | orchestrator | Monday 09 March 2026 01:04:33 +0000 (0:00:00.414) 0:00:30.911 ********** 2026-03-09 01:05:21.199949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:05:21.200029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:05:21.200046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:05:21.200058 | orchestrator | 2026-03-09 01:05:21.200069 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-09 01:05:21.200080 | orchestrator | Monday 09 March 2026 01:04:35 +0000 (0:00:01.710) 0:00:32.622 ********** 2026-03-09 01:05:21.200091 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:21.200103 | orchestrator | 2026-03-09 01:05:21.200143 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-09 01:05:21.200155 | orchestrator | Monday 09 March 2026 01:04:35 +0000 (0:00:00.152) 0:00:32.775 ********** 2026-03-09 01:05:21.200166 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:21.200177 
| orchestrator | skipping: [testbed-node-1] 2026-03-09 01:05:21.200188 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:05:21.200199 | orchestrator | 2026-03-09 01:05:21.200210 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-09 01:05:21.200221 | orchestrator | Monday 09 March 2026 01:04:36 +0000 (0:00:00.567) 0:00:33.343 ********** 2026-03-09 01:05:21.200232 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:05:21.200243 | orchestrator | 2026-03-09 01:05:21.200255 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-09 01:05:21.200265 | orchestrator | Monday 09 March 2026 01:04:36 +0000 (0:00:00.916) 0:00:34.259 ********** 2026-03-09 01:05:21.200284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:05:21.200314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:05:21.200327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:05:21.200338 | orchestrator | 2026-03-09 01:05:21.200350 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-09 01:05:21.200361 | orchestrator | Monday 09 March 2026 01:04:39 +0000 (0:00:02.086) 0:00:36.346 ********** 2026-03-09 
01:05:21.200372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:05:21.200384 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:21.200400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:05:21.200419 | orchestrator | skipping: [testbed-node-1] 
2026-03-09 01:05:21.200437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:05:21.200450 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:05:21.200460 | orchestrator | 2026-03-09 01:05:21.200471 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-09 01:05:21.200482 | orchestrator | Monday 09 March 2026 01:04:40 +0000 (0:00:00.996) 0:00:37.342 ********** 2026-03-09 01:05:21.200494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:05:21.200506 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:21.200517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:05:21.200529 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:05:21.200540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:05:21.200562 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:05:21.200573 | orchestrator | 2026-03-09 01:05:21.200584 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-09 01:05:21.200596 | orchestrator | Monday 09 March 2026 01:04:41 +0000 (0:00:01.223) 0:00:38.565 ********** 2026-03-09 01:05:21.200613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:05:21.200625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:05:21.200637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:05:21.200649 | orchestrator | 2026-03-09 01:05:21.200660 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-09 01:05:21.200671 | orchestrator | Monday 09 March 2026 01:04:42 +0000 (0:00:01.621) 0:00:40.186 ********** 2026-03-09 01:05:21.200682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:05:21.200706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:05:21.200726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-09 01:05:21.200738 | orchestrator | 2026-03-09 01:05:21.200749 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-09 01:05:21.200760 | orchestrator | Monday 09 March 2026 01:04:45 +0000 (0:00:02.911) 0:00:43.098 ********** 2026-03-09 01:05:21.200772 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-09 01:05:21.200783 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-09 01:05:21.200794 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-09 01:05:21.200806 | orchestrator | 2026-03-09 01:05:21.200816 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-09 01:05:21.200827 | orchestrator | Monday 09 March 2026 01:04:47 +0000 (0:00:01.649) 0:00:44.747 ********** 2026-03-09 01:05:21.200839 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:05:21.200850 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:05:21.200861 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:05:21.200879 | orchestrator | 2026-03-09 01:05:21.200898 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-09 01:05:21.200916 | orchestrator | Monday 09 March 2026 01:04:49 +0000 
(0:00:01.832) 0:00:46.580 ********** 2026-03-09 01:05:21.200935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 01:05:21.200993 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:05:21.201021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-09 
01:05:21.201040 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:05:21.201067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-09 01:05:21.201080 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:05:21.201091 | orchestrator |
2026-03-09 01:05:21.201102 | orchestrator | TASK [placement : Check placement containers] **********************************
2026-03-09 01:05:21.201184 | orchestrator | Monday 09 March 2026 01:04:49 +0000 (0:00:00.634) 0:00:47.214 **********
2026-03-09 01:05:21.201199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-09 01:05:21.201211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-09 01:05:21.201238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-09 01:05:21.201250 | orchestrator |
2026-03-09 01:05:21.201261 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-03-09 01:05:21.201272 | orchestrator | Monday 09 March 2026 01:04:51 +0000 (0:00:01.282) 0:00:48.497 **********
2026-03-09 01:05:21.201284 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:05:21.201295 | orchestrator |
2026-03-09 01:05:21.201306 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-03-09 01:05:21.201317 | orchestrator | Monday 09 March 2026 01:04:53 +0000 (0:00:02.740) 0:00:51.238 **********
2026-03-09 01:05:21.201328 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:05:21.201339 | orchestrator |
2026-03-09 01:05:21.201350 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-03-09 01:05:21.201361 | orchestrator | Monday 09 March 2026 01:04:56 +0000 (0:00:02.422) 0:00:53.661 **********
2026-03-09 01:05:21.201373 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:05:21.201384 | orchestrator |
2026-03-09 01:05:21.201395 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-09 01:05:21.201406 | orchestrator | Monday 09 March 2026 01:05:12 +0000 (0:00:16.207) 0:01:09.868 **********
2026-03-09 01:05:21.201417 | orchestrator |
2026-03-09 01:05:21.201428 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-09 01:05:21.201440 | orchestrator | Monday 09 March 2026 01:05:12 +0000 (0:00:00.077) 0:01:09.945 **********
2026-03-09 01:05:21.201450 | orchestrator |
2026-03-09 01:05:21.201469 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-09 01:05:21.201481 | orchestrator | Monday 09 March 2026 01:05:12 +0000 (0:00:00.073) 0:01:10.019 **********
2026-03-09 01:05:21.201492 | orchestrator |
2026-03-09 01:05:21.201503 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-03-09 01:05:21.201514 | orchestrator | Monday 09 March 2026 01:05:12 +0000 (0:00:00.073) 0:01:10.092 **********
2026-03-09 01:05:21.201525 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:05:21.201537 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:05:21.201548 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:05:21.201559 | orchestrator |
2026-03-09 01:05:21.201570 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:05:21.201583 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-09 01:05:21.201595 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-09 01:05:21.201613 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-09 01:05:21.201624 | orchestrator |
2026-03-09 01:05:21.201635 | orchestrator |
2026-03-09 01:05:21.201647 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:05:21.201658 | orchestrator | Monday 09 March 2026 01:05:18 +0000 (0:00:05.786) 0:01:15.879 **********
2026-03-09 01:05:21.201669 | orchestrator | ===============================================================================
2026-03-09 01:05:21.201680 | orchestrator | placement : Running placement bootstrap container ---------------------- 16.21s
2026-03-09 01:05:21.201691 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.35s
2026-03-09 01:05:21.201702 | orchestrator | placement : Restart placement-api container ----------------------------- 5.79s
2026-03-09 01:05:21.201713 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.31s
2026-03-09 01:05:21.201724 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.06s
2026-03-09 01:05:21.201735 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.00s
2026-03-09 01:05:21.201747 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.95s
2026-03-09 01:05:21.201758 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.79s
2026-03-09 01:05:21.201769 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.91s
2026-03-09 01:05:21.201780 | orchestrator | placement : Creating placement databases -------------------------------- 2.74s
2026-03-09 01:05:21.201791 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.42s
2026-03-09 01:05:21.201802 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.09s
2026-03-09 01:05:21.201813 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.83s
2026-03-09 01:05:21.201825 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.71s
2026-03-09 01:05:21.201836 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.65s
2026-03-09 01:05:21.201847 | orchestrator | placement : Copying over config.json files for services ----------------- 1.62s
2026-03-09 01:05:21.201858 | orchestrator | placement : Check placement containers ---------------------------------- 1.28s
2026-03-09 01:05:21.201872 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.22s
2026-03-09 01:05:21.201891 | orchestrator | placement : include_tasks ----------------------------------------------- 1.15s
2026-03-09 01:05:21.201909 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.00s
2026-03-09 01:05:21.202196 | orchestrator | 2026-03-09 01:05:21 | INFO  | Task 6b576874-1d1b-4e5a-b57d-b86d5100b8cc is in state STARTED
2026-03-09 01:05:21.205008 | orchestrator | 2026-03-09 01:05:21 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED
2026-03-09 01:05:21.205084 | orchestrator | 2026-03-09 01:05:21 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:05:24.263292 | orchestrator | 2026-03-09 01:05:24 | INFO  | Task d2f75e88-d7fd-479b-9245-bc62ca240e2c is in state STARTED
2026-03-09 01:05:24.263369 | orchestrator | 2026-03-09 01:05:24 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED
2026-03-09 01:05:24.264028 | orchestrator | 2026-03-09 01:05:24 | INFO  | Task 6b576874-1d1b-4e5a-b57d-b86d5100b8cc is in state STARTED
2026-03-09 01:05:24.265222 | orchestrator | 2026-03-09 01:05:24 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED
2026-03-09 01:05:24.265266 | orchestrator | 2026-03-09 01:05:24 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:05:27.314750 | orchestrator | 2026-03-09 01:05:27 | INFO  | Task d2f75e88-d7fd-479b-9245-bc62ca240e2c is in state SUCCESS
2026-03-09 01:05:27.318778 | orchestrator | 2026-03-09 01:05:27 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED
2026-03-09 01:05:27.322930 | orchestrator | 2026-03-09 01:05:27 | INFO  | Task 6b576874-1d1b-4e5a-b57d-b86d5100b8cc is in state STARTED
2026-03-09 01:05:27.325231 | orchestrator | 2026-03-09 01:05:27 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED
2026-03-09 01:05:27.327603 | orchestrator | 2026-03-09 01:05:27 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED
2026-03-09 01:05:27.327762 | orchestrator | 2026-03-09 01:05:27 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:05:30.381484 | orchestrator | 2026-03-09
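The placement-api container definition logged in the deploy output above carries a Docker-style healthcheck ('CMD-SHELL', 'healthcheck_curl http://<api-ip>:8780' with interval 30, retries 3). As a rough, illustrative Python analogue of such a retried HTTP probe (the injectable `check` hook and the stubbed attempt sequence are assumptions made for this sketch, not part of kolla-ansible):

```python
import urllib.request
import urllib.error

# Rough analogue of the container healthcheck configured above
# ('healthcheck_curl http://192.168.16.10:8780', retries: 3): probe
# the endpoint a few times and report healthy on the first success.
def http_healthcheck(url, retries=3, check=None):
    """Return True if any of `retries` attempts succeeds, else False."""
    if check is None:
        # Default probe: a plain HTTP GET, success on a 2xx/3xx status.
        def check(u):
            with urllib.request.urlopen(u, timeout=5) as resp:
                return 200 <= resp.status < 400
    for _ in range(retries):
        try:
            if check(url):
                return True
        except (urllib.error.URLError, OSError):
            pass  # swallow and retry, like the container healthcheck does
    return False

# Stubbed example: first attempt fails, second succeeds.
_attempts = iter([False, True])
print(http_healthcheck("http://192.168.16.10:8780",
                       check=lambda u: next(_attempts)))  # True
```

The real healthcheck additionally re-runs on a 30-second interval for the container's whole lifetime; the sketch only models the bounded retry.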
01:05:30 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state STARTED
2026-03-09 01:05:30.383844 | orchestrator | 2026-03-09 01:05:30 | INFO  | Task 6b576874-1d1b-4e5a-b57d-b86d5100b8cc is in state STARTED
2026-03-09 01:05:30.386810 | orchestrator | 2026-03-09 01:05:30 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED
2026-03-09 01:05:30.387198 | orchestrator | 2026-03-09 01:05:30 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED
2026-03-09 01:05:30.387541 | orchestrator | 2026-03-09 01:05:30 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeats every ~3 seconds until 01:07:20; tasks 7cc6efd7-0701-4dbb-b24c-982c01182437, 6b576874-1d1b-4e5a-b57d-b86d5100b8cc, 67df5c99-4244-488e-92d4-0c6a446f1a93 and 0aae5b24-f58f-41f5-be93-58ad3bb9942f remain in state STARTED ...]
2026-03-09 01:07:20.498393 | orchestrator | 2026-03-09 01:07:20 | INFO  | Task
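The OSISM client output above ('Task … is in state STARTED', 'Wait 1 second(s) until the next check', eventually 'SUCCESS') is a plain poll-until-done loop over the outstanding task IDs. A minimal sketch of that pattern, assuming a hypothetical `get_state` lookup in place of the real task-state query:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll each task's state until none is STARTED; raise on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        # One check per task per cycle, mirroring the log lines above.
        states = {tid: get_state(tid) for tid in task_ids}
        if all(s != "STARTED" for s in states.values()):
            return states
        if time.monotonic() >= deadline:
            raise TimeoutError(f"tasks still running: {states}")
        time.sleep(interval)  # "Wait 1 second(s) until the next check"

# Canned example: the task flips to SUCCESS on the third poll.
_seq = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_tasks(["d2f75e88"], lambda tid: next(_seq), interval=0.01)
print(result)  # {'d2f75e88': 'SUCCESS'}
```

The fixed sleep keeps load on the task backend constant; the log's roughly 3-second cadence is the 1-second wait plus the per-cycle query time.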
cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED
2026-03-09 01:07:20.502426 | orchestrator |
2026-03-09 01:07:20.502557 | orchestrator |
2026-03-09 01:07:20.502580 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:07:20.502600 | orchestrator |
2026-03-09 01:07:20.502616 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:07:20.502634 | orchestrator | Monday 09 March 2026 01:05:23 +0000 (0:00:00.230) 0:00:00.230 **********
2026-03-09 01:07:20.502652 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:07:20.502671 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:07:20.502688 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:07:20.502704 | orchestrator |
2026-03-09 01:07:20.502723 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:07:20.502741 | orchestrator | Monday 09 March 2026 01:05:24 +0000 (0:00:00.335) 0:00:00.566 **********
2026-03-09 01:07:20.502759 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-09 01:07:20.502777 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-09 01:07:20.502794 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-09 01:07:20.502810 | orchestrator |
2026-03-09 01:07:20.502828 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-09 01:07:20.502845 | orchestrator |
2026-03-09 01:07:20.502862 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-03-09 01:07:20.502879 | orchestrator | Monday 09 March 2026 01:05:24 +0000 (0:00:00.724) 0:00:01.291 **********
2026-03-09 01:07:20.502896 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:07:20.502913 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:07:20.502931 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:07:20.502947 | orchestrator |
2026-03-09 01:07:20.502964 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:07:20.503019 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:07:20.503039 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:07:20.503055 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:07:20.503070 | orchestrator |
2026-03-09 01:07:20.503087 | orchestrator |
2026-03-09 01:07:20.503103 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:07:20.503121 | orchestrator | Monday 09 March 2026 01:05:25 +0000 (0:00:00.786) 0:00:02.077 **********
2026-03-09 01:07:20.503139 | orchestrator | ===============================================================================
2026-03-09 01:07:20.503156 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.79s
2026-03-09 01:07:20.503174 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s
2026-03-09 01:07:20.503192 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-03-09 01:07:20.503209 | orchestrator |
2026-03-09 01:07:20.503534 | orchestrator | 2026-03-09 01:07:20 | INFO  | Task 7cc6efd7-0701-4dbb-b24c-982c01182437 is in state SUCCESS
2026-03-09 01:07:20.504713 | orchestrator |
2026-03-09 01:07:20.504761 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:07:20.504777 | orchestrator |
2026-03-09 01:07:20.504791 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:07:20.504808 | orchestrator | Monday 09 March 2026 01:01:43 +0000 (0:00:00.586) 0:00:00.586 **********
2026-03-09 01:07:20.504822 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:07:20.504835 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:07:20.504850 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:07:20.504863 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:07:20.504877 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:07:20.504889 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:07:20.504901 | orchestrator |
2026-03-09 01:07:20.504997 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:07:20.505012 | orchestrator | Monday 09 March 2026 01:01:44 +0000 (0:00:00.982) 0:00:01.569 **********
2026-03-09 01:07:20.505137 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-03-09 01:07:20.505154 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-03-09 01:07:20.505169 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-03-09 01:07:20.505315 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-03-09 01:07:20.505334 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-03-09 01:07:20.505348 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-03-09 01:07:20.505361 | orchestrator |
2026-03-09 01:07:20.505377 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-03-09 01:07:20.505391 | orchestrator |
2026-03-09 01:07:20.505405 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-09 01:07:20.505420 | orchestrator | Monday 09 March 2026 01:01:44 +0000 (0:00:00.872) 0:00:02.442 **********
2026-03-09 01:07:20.505490 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 01:07:20.505510 | orchestrator |
2026-03-09 01:07:20.505525 | orchestrator | TASK
[neutron : Get container facts] ******************************************* 2026-03-09 01:07:20.505540 | orchestrator | Monday 09 March 2026 01:01:46 +0000 (0:00:01.387) 0:00:03.829 ********** 2026-03-09 01:07:20.505554 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:07:20.505571 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:07:20.505586 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:07:20.505601 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:07:20.505616 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:07:20.505650 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:07:20.505665 | orchestrator | 2026-03-09 01:07:20.505680 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-09 01:07:20.505696 | orchestrator | Monday 09 March 2026 01:01:47 +0000 (0:00:01.399) 0:00:05.228 ********** 2026-03-09 01:07:20.505711 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:07:20.505759 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:07:20.505772 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:07:20.505786 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:07:20.505801 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:07:20.505815 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:07:20.505829 | orchestrator | 2026-03-09 01:07:20.505843 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-09 01:07:20.505857 | orchestrator | Monday 09 March 2026 01:01:49 +0000 (0:00:01.272) 0:00:06.500 ********** 2026-03-09 01:07:20.505871 | orchestrator | ok: [testbed-node-0] => { 2026-03-09 01:07:20.505887 | orchestrator |  "changed": false, 2026-03-09 01:07:20.505903 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:07:20.505918 | orchestrator | } 2026-03-09 01:07:20.505933 | orchestrator | ok: [testbed-node-1] => { 2026-03-09 01:07:20.505948 | orchestrator |  "changed": false, 2026-03-09 01:07:20.505963 | orchestrator |  "msg": "All 
assertions passed" 2026-03-09 01:07:20.505978 | orchestrator | } 2026-03-09 01:07:20.505994 | orchestrator | ok: [testbed-node-2] => { 2026-03-09 01:07:20.506007 | orchestrator |  "changed": false, 2026-03-09 01:07:20.506124 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:07:20.506157 | orchestrator | } 2026-03-09 01:07:20.506171 | orchestrator | ok: [testbed-node-3] => { 2026-03-09 01:07:20.506185 | orchestrator |  "changed": false, 2026-03-09 01:07:20.506198 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:07:20.506211 | orchestrator | } 2026-03-09 01:07:20.506225 | orchestrator | ok: [testbed-node-4] => { 2026-03-09 01:07:20.506238 | orchestrator |  "changed": false, 2026-03-09 01:07:20.506252 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:07:20.506290 | orchestrator | } 2026-03-09 01:07:20.506305 | orchestrator | ok: [testbed-node-5] => { 2026-03-09 01:07:20.506318 | orchestrator |  "changed": false, 2026-03-09 01:07:20.506332 | orchestrator |  "msg": "All assertions passed" 2026-03-09 01:07:20.506345 | orchestrator | } 2026-03-09 01:07:20.506358 | orchestrator | 2026-03-09 01:07:20.506371 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-09 01:07:20.506384 | orchestrator | Monday 09 March 2026 01:01:50 +0000 (0:00:01.033) 0:00:07.534 ********** 2026-03-09 01:07:20.506396 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.506408 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.506422 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.506434 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.506446 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.506459 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.506473 | orchestrator | 2026-03-09 01:07:20.506486 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-09 01:07:20.506500 | 
orchestrator | Monday 09 March 2026 01:01:50 +0000 (0:00:00.691) 0:00:08.225 ********** 2026-03-09 01:07:20.506514 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-09 01:07:20.506527 | orchestrator | 2026-03-09 01:07:20.506540 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-09 01:07:20.506554 | orchestrator | Monday 09 March 2026 01:01:54 +0000 (0:00:03.591) 0:00:11.816 ********** 2026-03-09 01:07:20.506568 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-09 01:07:20.506583 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-09 01:07:20.506595 | orchestrator | 2026-03-09 01:07:20.506635 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-09 01:07:20.506664 | orchestrator | Monday 09 March 2026 01:02:01 +0000 (0:00:07.566) 0:00:19.383 ********** 2026-03-09 01:07:20.506678 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:07:20.506691 | orchestrator | 2026-03-09 01:07:20.506705 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-09 01:07:20.506729 | orchestrator | Monday 09 March 2026 01:02:05 +0000 (0:00:03.634) 0:00:23.017 ********** 2026-03-09 01:07:20.506742 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-09 01:07:20.506756 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:07:20.506769 | orchestrator | 2026-03-09 01:07:20.506782 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-09 01:07:20.506796 | orchestrator | Monday 09 March 2026 01:02:09 +0000 (0:00:04.047) 0:00:27.064 ********** 2026-03-09 01:07:20.506810 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:07:20.506824 
| orchestrator | 2026-03-09 01:07:20.506838 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-09 01:07:20.506852 | orchestrator | Monday 09 March 2026 01:02:13 +0000 (0:00:03.615) 0:00:30.680 ********** 2026-03-09 01:07:20.506866 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-09 01:07:20.506880 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-09 01:07:20.506893 | orchestrator | 2026-03-09 01:07:20.506907 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-09 01:07:20.506920 | orchestrator | Monday 09 March 2026 01:02:21 +0000 (0:00:08.202) 0:00:38.883 ********** 2026-03-09 01:07:20.506933 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.506947 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.506960 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.506972 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.506984 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.506998 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.507010 | orchestrator | 2026-03-09 01:07:20.507022 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-09 01:07:20.507035 | orchestrator | Monday 09 March 2026 01:02:22 +0000 (0:00:00.970) 0:00:39.853 ********** 2026-03-09 01:07:20.507049 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.507062 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.507075 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.507087 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.507100 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.507114 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.507127 | orchestrator | 2026-03-09 01:07:20.507140 | orchestrator | TASK 
[neutron : Check IPv6 support] ******************************************** 2026-03-09 01:07:20.507154 | orchestrator | Monday 09 March 2026 01:02:26 +0000 (0:00:03.838) 0:00:43.692 ********** 2026-03-09 01:07:20.507166 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:07:20.507179 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:07:20.507193 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:07:20.507206 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:07:20.507220 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:07:20.507232 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:07:20.507244 | orchestrator | 2026-03-09 01:07:20.507292 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-09 01:07:20.507307 | orchestrator | Monday 09 March 2026 01:02:27 +0000 (0:00:01.485) 0:00:45.177 ********** 2026-03-09 01:07:20.507320 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.507332 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.507346 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.507357 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.507370 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.507383 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.507396 | orchestrator | 2026-03-09 01:07:20.507409 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-09 01:07:20.507435 | orchestrator | Monday 09 March 2026 01:02:30 +0000 (0:00:02.796) 0:00:47.974 ********** 2026-03-09 01:07:20.507452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.507497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.507513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.507529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.507546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.507570 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-09 01:07:20.507583 | orchestrator |
2026-03-09 01:07:20.507596 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-03-09 01:07:20.507609 | orchestrator | Monday 09 March 2026 01:02:35 +0000 (0:00:05.197) 0:00:53.171 **********
2026-03-09 01:07:20.507623 | orchestrator | [WARNING]: Skipped
2026-03-09 01:07:20.507636 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-03-09 01:07:20.507650 | orchestrator | due to this access issue:
2026-03-09 01:07:20.507663 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-03-09 01:07:20.507677 | orchestrator | a directory
2026-03-09 01:07:20.507691 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-09 01:07:20.507704 | orchestrator |
2026-03-09 01:07:20.507716 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-09 01:07:20.507733 | orchestrator | Monday 09 March 2026 01:02:37 +0000 (0:00:01.444) 0:00:54.615 **********
2026-03-09 01:07:20.507746 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4,
testbed-node-5 2026-03-09 01:07:20.507759 | orchestrator | 2026-03-09 01:07:20.507788 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-09 01:07:20.507800 | orchestrator | Monday 09 March 2026 01:02:39 +0000 (0:00:02.069) 0:00:56.685 ********** 2026-03-09 01:07:20.507811 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.507824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2026-03-09 01:07:20.507842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.507854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.507878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.507891 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.507903 | orchestrator | 2026-03-09 01:07:20.507915 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-09 01:07:20.507926 | orchestrator | Monday 09 March 2026 01:02:44 +0000 (0:00:05.168) 0:01:01.853 ********** 2026-03-09 01:07:20.507938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.507957 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.507968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.507980 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.507992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.508009 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.508067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.508082 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.508093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.508114 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.508126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.508139 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.508150 | orchestrator | 2026-03-09 01:07:20.508163 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-09 01:07:20.508175 | orchestrator | Monday 09 March 2026 01:02:49 +0000 (0:00:05.275) 0:01:07.129 ********** 2026-03-09 01:07:20.508187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.508198 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.508223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.508235 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.508247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.508286 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.508299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.508311 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.508323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.508334 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.508345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.508356 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.508368 | orchestrator | 2026-03-09 01:07:20.508380 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-09 01:07:20.508390 | orchestrator | Monday 09 March 2026 01:02:55 +0000 (0:00:05.526) 0:01:12.655 ********** 2026-03-09 01:07:20.508401 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.508412 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.508424 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.508435 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.508446 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.508457 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.508472 | orchestrator | 2026-03-09 01:07:20.508485 | orchestrator | TASK [neutron : Check if policies shall be overwritten] 
************************ 2026-03-09 01:07:20.508503 | orchestrator | Monday 09 March 2026 01:02:59 +0000 (0:00:04.502) 0:01:17.158 ********** 2026-03-09 01:07:20.508515 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.508527 | orchestrator | 2026-03-09 01:07:20.508538 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-09 01:07:20.508549 | orchestrator | Monday 09 March 2026 01:02:59 +0000 (0:00:00.145) 0:01:17.303 ********** 2026-03-09 01:07:20.508566 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.508578 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.508597 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.508609 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.508621 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.508633 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.508645 | orchestrator | 2026-03-09 01:07:20.508655 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-09 01:07:20.508666 | orchestrator | Monday 09 March 2026 01:03:01 +0000 (0:00:01.230) 0:01:18.534 ********** 2026-03-09 01:07:20.508678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.508691 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.508702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.508713 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.508724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.508736 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.508754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.508774 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.508791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.508804 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.508814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.508826 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.508836 | orchestrator | 2026-03-09 01:07:20.508847 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-09 01:07:20.508859 | orchestrator | Monday 09 March 2026 01:03:06 +0000 (0:00:05.296) 0:01:23.830 ********** 2026-03-09 01:07:20.508870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.508883 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.508907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.508927 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.508940 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.508952 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.508964 | orchestrator | 2026-03-09 01:07:20.508975 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-09 01:07:20.508986 | orchestrator | Monday 09 March 2026 01:03:11 +0000 (0:00:05.318) 0:01:29.149 ********** 2026-03-09 01:07:20.508998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.509032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.509045 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.509057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.509068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.509080 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.509098 | orchestrator | 2026-03-09 01:07:20.509108 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-09 01:07:20.509119 | orchestrator | Monday 09 March 2026 01:03:19 +0000 (0:00:07.492) 0:01:36.641 ********** 2026-03-09 01:07:20.509142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.509153 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.509164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.509176 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.509188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.509198 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.509209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.509226 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.509237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.509281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.509294 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.509304 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.509314 | orchestrator | 2026-03-09 01:07:20.509325 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-09 01:07:20.509336 | orchestrator | Monday 09 March 2026 01:03:22 +0000 (0:00:03.187) 0:01:39.828 ********** 2026-03-09 01:07:20.509346 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.509358 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.509369 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.509380 | orchestrator | changed: [testbed-node-1] 2026-03-09 
01:07:20.509391 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:20.509401 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:20.509412 | orchestrator | 2026-03-09 01:07:20.509424 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-09 01:07:20.509436 | orchestrator | Monday 09 March 2026 01:03:25 +0000 (0:00:03.435) 0:01:43.264 ********** 2026-03-09 01:07:20.509448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.509461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.509483 | 
orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.509495 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.509508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.509520 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.509545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.509557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.509569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.509579 | orchestrator | 2026-03-09 01:07:20.509590 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-09 01:07:20.509608 | orchestrator | Monday 09 March 2026 01:03:30 +0000 
(0:00:04.284) 0:01:47.549 ********** 2026-03-09 01:07:20.509619 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.509630 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.509640 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.509650 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.509660 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.509670 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.509681 | orchestrator | 2026-03-09 01:07:20.509691 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-09 01:07:20.509702 | orchestrator | Monday 09 March 2026 01:03:34 +0000 (0:00:03.933) 0:01:51.482 ********** 2026-03-09 01:07:20.509712 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.509723 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.509734 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.509745 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.509755 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.509765 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.509776 | orchestrator | 2026-03-09 01:07:20.509787 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-09 01:07:20.509798 | orchestrator | Monday 09 March 2026 01:03:37 +0000 (0:00:03.197) 0:01:54.680 ********** 2026-03-09 01:07:20.509808 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.509819 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.509830 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.509841 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.509851 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.509861 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.509872 | orchestrator | 2026-03-09 01:07:20.509883 | orchestrator | TASK [neutron : Copying 
over mlnx_agent.ini] *********************************** 2026-03-09 01:07:20.509894 | orchestrator | Monday 09 March 2026 01:03:41 +0000 (0:00:03.953) 0:01:58.633 ********** 2026-03-09 01:07:20.509905 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.509916 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.509928 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.509939 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.509950 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.509961 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.509973 | orchestrator | 2026-03-09 01:07:20.509985 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-09 01:07:20.509996 | orchestrator | Monday 09 March 2026 01:03:43 +0000 (0:00:02.584) 0:02:01.217 ********** 2026-03-09 01:07:20.510008 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.510065 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.510079 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.510091 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.510111 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.510122 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.510133 | orchestrator | 2026-03-09 01:07:20.510145 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-09 01:07:20.510156 | orchestrator | Monday 09 March 2026 01:03:46 +0000 (0:00:02.734) 0:02:03.951 ********** 2026-03-09 01:07:20.510174 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.510186 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.510199 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.510212 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.510225 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.510238 | orchestrator | 
skipping: [testbed-node-3] 2026-03-09 01:07:20.510251 | orchestrator | 2026-03-09 01:07:20.510288 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-09 01:07:20.510300 | orchestrator | Monday 09 March 2026 01:03:49 +0000 (0:00:03.079) 0:02:07.031 ********** 2026-03-09 01:07:20.510325 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:07:20.510336 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.510347 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:07:20.510358 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.510369 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:07:20.510380 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.510391 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:07:20.510402 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.510440 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:07:20.510451 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.510462 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-09 01:07:20.510473 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.510483 | orchestrator | 2026-03-09 01:07:20.510494 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-09 01:07:20.510504 | orchestrator | Monday 09 March 2026 01:03:52 +0000 (0:00:02.934) 0:02:09.966 ********** 2026-03-09 01:07:20.510518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.510530 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.510542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.510554 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.510575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.510594 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.510612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.510624 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.510634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.510645 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.510656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.510666 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.510691 | orchestrator | 2026-03-09 01:07:20.510702 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-09 01:07:20.510712 | orchestrator | Monday 09 March 2026 01:03:56 +0000 (0:00:04.023) 0:02:13.989 ********** 2026-03-09 01:07:20.510721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.510732 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.510755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.510774 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.510784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.510794 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.510804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.510814 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.510824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.510835 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.510845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.510861 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.510870 | orchestrator | 2026-03-09 01:07:20.510880 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-09 01:07:20.510890 | orchestrator | Monday 09 March 2026 01:03:59 +0000 (0:00:03.177) 0:02:17.166 ********** 2026-03-09 01:07:20.510899 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.511123 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.511146 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.511156 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.511167 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.511177 | orchestrator | skipping: [testbed-node-5] 2026-03-09 
01:07:20.511187 | orchestrator | 2026-03-09 01:07:20.511208 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-09 01:07:20.511218 | orchestrator | Monday 09 March 2026 01:04:02 +0000 (0:00:03.118) 0:02:20.285 ********** 2026-03-09 01:07:20.511228 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.511238 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.511248 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.511310 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:07:20.511322 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:07:20.511331 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:07:20.511341 | orchestrator | 2026-03-09 01:07:20.511352 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-09 01:07:20.511359 | orchestrator | Monday 09 March 2026 01:04:07 +0000 (0:00:04.895) 0:02:25.181 ********** 2026-03-09 01:07:20.511366 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.511372 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.511381 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.511391 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.511401 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.511411 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.511421 | orchestrator | 2026-03-09 01:07:20.511430 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-09 01:07:20.511440 | orchestrator | Monday 09 March 2026 01:04:11 +0000 (0:00:04.129) 0:02:29.310 ********** 2026-03-09 01:07:20.511450 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.511460 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.511471 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.511481 | orchestrator | skipping: [testbed-node-3] 2026-03-09 
01:07:20.511491 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.511502 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.511509 | orchestrator | 2026-03-09 01:07:20.511515 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-09 01:07:20.511522 | orchestrator | Monday 09 March 2026 01:04:14 +0000 (0:00:02.655) 0:02:31.966 ********** 2026-03-09 01:07:20.511528 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.511534 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.511540 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.511546 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.511552 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.511559 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.511565 | orchestrator | 2026-03-09 01:07:20.511571 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-09 01:07:20.511577 | orchestrator | Monday 09 March 2026 01:04:18 +0000 (0:00:03.996) 0:02:35.962 ********** 2026-03-09 01:07:20.511583 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.511590 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.511596 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.511602 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.511618 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.511625 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.511631 | orchestrator | 2026-03-09 01:07:20.511637 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-09 01:07:20.511643 | orchestrator | Monday 09 March 2026 01:04:21 +0000 (0:00:03.451) 0:02:39.414 ********** 2026-03-09 01:07:20.511649 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.511656 | orchestrator | skipping: [testbed-node-0] 2026-03-09 
01:07:20.511662 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.511668 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.511674 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.511681 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.511687 | orchestrator | 2026-03-09 01:07:20.511693 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-09 01:07:20.511699 | orchestrator | Monday 09 March 2026 01:04:24 +0000 (0:00:02.523) 0:02:41.938 ********** 2026-03-09 01:07:20.511705 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.511712 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.511718 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.511724 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.511730 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.511736 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.511742 | orchestrator | 2026-03-09 01:07:20.511750 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-09 01:07:20.511758 | orchestrator | Monday 09 March 2026 01:04:27 +0000 (0:00:02.542) 0:02:44.480 ********** 2026-03-09 01:07:20.511765 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.511772 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.511779 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.511787 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.511794 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.511802 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.511809 | orchestrator | 2026-03-09 01:07:20.511816 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-09 01:07:20.511823 | orchestrator | Monday 09 March 2026 01:04:29 +0000 (0:00:02.897) 0:02:47.378 ********** 2026-03-09 
01:07:20.511832 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:07:20.511840 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.511847 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:07:20.511853 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.511860 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:07:20.511880 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.511886 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:07:20.511893 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.511906 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:07:20.511913 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.511920 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-09 01:07:20.511926 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.511933 | orchestrator | 2026-03-09 01:07:20.511945 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-09 01:07:20.511951 | orchestrator | Monday 09 March 2026 01:04:33 +0000 (0:00:03.204) 0:02:50.582 ********** 2026-03-09 01:07:20.511960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.511971 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.511977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.511983 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.511989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.511995 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.512000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.512006 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.512020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-09 01:07:20.512030 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.512036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-09 01:07:20.512042 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.512047 | orchestrator | 2026-03-09 01:07:20.512053 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-09 01:07:20.512059 | orchestrator | Monday 09 March 2026 01:04:35 +0000 (0:00:02.479) 0:02:53.061 ********** 2026-03-09 01:07:20.512064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.512071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.512080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-09 01:07:20.512095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.512106 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.512116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-09 01:07:20.512125 | orchestrator | 2026-03-09 01:07:20.512135 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-09 01:07:20.512146 | orchestrator | Monday 09 March 2026 01:04:39 +0000 (0:00:03.931) 0:02:56.993 ********** 2026-03-09 01:07:20.512156 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:20.512166 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:20.512176 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:20.512183 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:07:20.512188 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:07:20.512194 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:07:20.512199 | orchestrator | 2026-03-09 01:07:20.512205 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-09 01:07:20.512210 | orchestrator | Monday 09 March 2026 01:04:40 +0000 (0:00:00.680) 0:02:57.673 ********** 2026-03-09 01:07:20.512215 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:20.512221 | orchestrator | 2026-03-09 01:07:20.512226 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-09 01:07:20.512232 | orchestrator | Monday 09 March 2026 01:04:42 +0000 (0:00:02.290) 0:02:59.964 ********** 2026-03-09 01:07:20.512237 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:20.512242 | orchestrator 
| 2026-03-09 01:07:20.512248 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-09 01:07:20.512253 | orchestrator | Monday 09 March 2026 01:04:44 +0000 (0:00:02.493) 0:03:02.458 ********** 2026-03-09 01:07:20.512298 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:20.512304 | orchestrator | 2026-03-09 01:07:20.512309 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:07:20.512320 | orchestrator | Monday 09 March 2026 01:05:33 +0000 (0:00:48.240) 0:03:50.698 ********** 2026-03-09 01:07:20.512326 | orchestrator | 2026-03-09 01:07:20.512332 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:07:20.512337 | orchestrator | Monday 09 March 2026 01:05:33 +0000 (0:00:00.084) 0:03:50.783 ********** 2026-03-09 01:07:20.512342 | orchestrator | 2026-03-09 01:07:20.512348 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:07:20.512353 | orchestrator | Monday 09 March 2026 01:05:33 +0000 (0:00:00.339) 0:03:51.123 ********** 2026-03-09 01:07:20.512359 | orchestrator | 2026-03-09 01:07:20.512364 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:07:20.512370 | orchestrator | Monday 09 March 2026 01:05:33 +0000 (0:00:00.091) 0:03:51.214 ********** 2026-03-09 01:07:20.512375 | orchestrator | 2026-03-09 01:07:20.512394 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:07:20.512403 | orchestrator | Monday 09 March 2026 01:05:33 +0000 (0:00:00.072) 0:03:51.286 ********** 2026-03-09 01:07:20.512413 | orchestrator | 2026-03-09 01:07:20.512422 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-09 01:07:20.512437 | orchestrator | Monday 09 March 2026 01:05:33 +0000 
(0:00:00.106) 0:03:51.392 ********** 2026-03-09 01:07:20.512447 | orchestrator | 2026-03-09 01:07:20.512457 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-09 01:07:20.512468 | orchestrator | Monday 09 March 2026 01:05:34 +0000 (0:00:00.088) 0:03:51.481 ********** 2026-03-09 01:07:20.512478 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:20.512488 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:20.512497 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:20.512506 | orchestrator | 2026-03-09 01:07:20.512515 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-09 01:07:20.512523 | orchestrator | Monday 09 March 2026 01:06:00 +0000 (0:00:26.784) 0:04:18.266 ********** 2026-03-09 01:07:20.512532 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:07:20.512542 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:07:20.512551 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:07:20.512560 | orchestrator | 2026-03-09 01:07:20.512569 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:07:20.512579 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 01:07:20.512590 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-09 01:07:20.512599 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-09 01:07:20.512606 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 01:07:20.512612 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-09 01:07:20.512617 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 
ignored=0 2026-03-09 01:07:20.512623 | orchestrator | 2026-03-09 01:07:20.512628 | orchestrator | 2026-03-09 01:07:20.512634 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:07:20.512639 | orchestrator | Monday 09 March 2026 01:07:18 +0000 (0:01:17.796) 0:05:36.063 ********** 2026-03-09 01:07:20.512645 | orchestrator | =============================================================================== 2026-03-09 01:07:20.512650 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 77.80s 2026-03-09 01:07:20.512655 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 48.24s 2026-03-09 01:07:20.512667 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.78s 2026-03-09 01:07:20.512672 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.20s 2026-03-09 01:07:20.512677 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.57s 2026-03-09 01:07:20.512683 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.49s 2026-03-09 01:07:20.512688 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 5.53s 2026-03-09 01:07:20.512694 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.32s 2026-03-09 01:07:20.512699 | orchestrator | neutron : Copying over existing policy file ----------------------------- 5.30s 2026-03-09 01:07:20.512704 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 5.28s 2026-03-09 01:07:20.512710 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 5.20s 2026-03-09 01:07:20.512716 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.17s 2026-03-09 01:07:20.512721 | orchestrator 
| neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.90s 2026-03-09 01:07:20.512727 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.50s 2026-03-09 01:07:20.512732 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.28s 2026-03-09 01:07:20.512737 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 4.13s 2026-03-09 01:07:20.512743 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.05s 2026-03-09 01:07:20.512748 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 4.02s 2026-03-09 01:07:20.512754 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 4.00s 2026-03-09 01:07:20.512759 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 3.95s 2026-03-09 01:07:20.512765 | orchestrator | 2026-03-09 01:07:20 | INFO  | Task 6b576874-1d1b-4e5a-b57d-b86d5100b8cc is in state STARTED 2026-03-09 01:07:20.512770 | orchestrator | 2026-03-09 01:07:20 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:07:20.512776 | orchestrator | 2026-03-09 01:07:20 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:07:20.512786 | orchestrator | 2026-03-09 01:07:20 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:23.559570 | orchestrator | 2026-03-09 01:07:23 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:07:23.560396 | orchestrator | 2026-03-09 01:07:23 | INFO  | Task 6b576874-1d1b-4e5a-b57d-b86d5100b8cc is in state STARTED 2026-03-09 01:07:23.561590 | orchestrator | 2026-03-09 01:07:23 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:07:23.562492 | orchestrator | 2026-03-09 01:07:23 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in 
state STARTED 2026-03-09 01:07:23.562523 | orchestrator | 2026-03-09 01:07:23 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:26.611834 | orchestrator | 2026-03-09 01:07:26 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:07:26.613368 | orchestrator | 2026-03-09 01:07:26 | INFO  | Task 6b576874-1d1b-4e5a-b57d-b86d5100b8cc is in state STARTED 2026-03-09 01:07:26.614581 | orchestrator | 2026-03-09 01:07:26 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:07:26.616051 | orchestrator | 2026-03-09 01:07:26 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:07:26.616102 | orchestrator | 2026-03-09 01:07:26 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:29.650469 | orchestrator | 2026-03-09 01:07:29 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:07:29.656655 | orchestrator | 2026-03-09 01:07:29 | INFO  | Task 6b576874-1d1b-4e5a-b57d-b86d5100b8cc is in state SUCCESS 2026-03-09 01:07:29.658401 | orchestrator | 2026-03-09 01:07:29.658454 | orchestrator | 2026-03-09 01:07:29.658463 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:07:29.658471 | orchestrator | 2026-03-09 01:07:29.658479 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:07:29.658487 | orchestrator | Monday 09 March 2026 01:05:11 +0000 (0:00:00.273) 0:00:00.273 ********** 2026-03-09 01:07:29.658494 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:07:29.658503 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:07:29.658510 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:07:29.658516 | orchestrator | 2026-03-09 01:07:29.658523 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:07:29.658530 | orchestrator | Monday 09 March 2026 01:05:12 +0000 
(0:00:00.326) 0:00:00.600 ********** 2026-03-09 01:07:29.658537 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-09 01:07:29.658544 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-09 01:07:29.658551 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-09 01:07:29.658558 | orchestrator | 2026-03-09 01:07:29.658579 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-09 01:07:29.658587 | orchestrator | 2026-03-09 01:07:29.658601 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-09 01:07:29.658608 | orchestrator | Monday 09 March 2026 01:05:12 +0000 (0:00:00.519) 0:00:01.120 ********** 2026-03-09 01:07:29.658616 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:07:29.658624 | orchestrator | 2026-03-09 01:07:29.658631 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-09 01:07:29.658638 | orchestrator | Monday 09 March 2026 01:05:13 +0000 (0:00:00.630) 0:00:01.750 ********** 2026-03-09 01:07:29.658646 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-09 01:07:29.658652 | orchestrator | 2026-03-09 01:07:29.658660 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-09 01:07:29.658666 | orchestrator | Monday 09 March 2026 01:05:17 +0000 (0:00:03.719) 0:00:05.470 ********** 2026-03-09 01:07:29.658674 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-09 01:07:29.658680 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-09 01:07:29.658687 | orchestrator | 2026-03-09 01:07:29.658695 | orchestrator | TASK [service-ks-register : magnum 
| Creating projects] ************************ 2026-03-09 01:07:29.658702 | orchestrator | Monday 09 March 2026 01:05:24 +0000 (0:00:07.104) 0:00:12.574 ********** 2026-03-09 01:07:29.658709 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:07:29.658716 | orchestrator | 2026-03-09 01:07:29.658722 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-09 01:07:29.658727 | orchestrator | Monday 09 March 2026 01:05:27 +0000 (0:00:03.732) 0:00:16.307 ********** 2026-03-09 01:07:29.658734 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-09 01:07:29.658740 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:07:29.658746 | orchestrator | 2026-03-09 01:07:29.658751 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-09 01:07:29.658757 | orchestrator | Monday 09 March 2026 01:05:32 +0000 (0:00:04.255) 0:00:20.562 ********** 2026-03-09 01:07:29.658763 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:07:29.658768 | orchestrator | 2026-03-09 01:07:29.658774 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-09 01:07:29.658787 | orchestrator | Monday 09 March 2026 01:05:36 +0000 (0:00:04.210) 0:00:24.772 ********** 2026-03-09 01:07:29.658817 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-09 01:07:29.658823 | orchestrator | 2026-03-09 01:07:29.658829 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-09 01:07:29.658835 | orchestrator | Monday 09 March 2026 01:05:40 +0000 (0:00:04.402) 0:00:29.175 ********** 2026-03-09 01:07:29.658841 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:29.658846 | orchestrator | 2026-03-09 01:07:29.658860 | orchestrator | TASK [magnum : Creating Magnum trustee user] 
*********************************** 2026-03-09 01:07:29.658865 | orchestrator | Monday 09 March 2026 01:05:44 +0000 (0:00:03.713) 0:00:32.889 ********** 2026-03-09 01:07:29.658871 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:29.658877 | orchestrator | 2026-03-09 01:07:29.658882 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-09 01:07:29.658888 | orchestrator | Monday 09 March 2026 01:05:48 +0000 (0:00:04.255) 0:00:37.145 ********** 2026-03-09 01:07:29.658894 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:29.658899 | orchestrator | 2026-03-09 01:07:29.658905 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-09 01:07:29.658910 | orchestrator | Monday 09 March 2026 01:05:52 +0000 (0:00:03.725) 0:00:40.870 ********** 2026-03-09 01:07:29.658932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.658942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.658948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.658961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.658972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.658983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 
01:07:29.658990 | orchestrator | 2026-03-09 01:07:29.658995 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-09 01:07:29.659001 | orchestrator | Monday 09 March 2026 01:05:54 +0000 (0:00:01.810) 0:00:42.680 ********** 2026-03-09 01:07:29.659006 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:29.659012 | orchestrator | 2026-03-09 01:07:29.659018 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-09 01:07:29.659023 | orchestrator | Monday 09 March 2026 01:05:54 +0000 (0:00:00.386) 0:00:43.067 ********** 2026-03-09 01:07:29.659030 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:29.659036 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:29.659043 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:29.659050 | orchestrator | 2026-03-09 01:07:29.659056 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-09 01:07:29.659062 | orchestrator | Monday 09 March 2026 01:05:55 +0000 (0:00:00.977) 0:00:44.044 ********** 2026-03-09 01:07:29.659068 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:07:29.659075 | orchestrator | 2026-03-09 01:07:29.659081 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-09 01:07:29.659087 | orchestrator | Monday 09 March 2026 01:05:57 +0000 (0:00:01.483) 0:00:45.528 ********** 2026-03-09 01:07:29.659093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659154 | orchestrator | 2026-03-09 01:07:29.659160 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-09 01:07:29.659166 | orchestrator | Monday 09 March 2026 01:05:59 +0000 (0:00:02.690) 0:00:48.218 ********** 2026-03-09 01:07:29.659172 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:07:29.659179 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:07:29.659185 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:07:29.659191 | orchestrator | 2026-03-09 01:07:29.659197 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-09 01:07:29.659203 | orchestrator | Monday 09 March 2026 01:06:00 +0000 (0:00:00.422) 0:00:48.641 ********** 2026-03-09 01:07:29.659209 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:07:29.659215 | orchestrator | 2026-03-09 01:07:29.659221 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-09 01:07:29.659227 | orchestrator | Monday 09 March 2026 01:06:01 +0000 (0:00:01.067) 0:00:49.708 ********** 2026-03-09 01:07:29.659236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659359 | orchestrator | 2026-03-09 01:07:29.659366 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-09 01:07:29.659372 | orchestrator | Monday 09 March 2026 01:06:04 +0000 (0:00:03.301) 0:00:53.010 ********** 2026-03-09 01:07:29.659383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:07:29.659390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:07:29.659401 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:29.659408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:07:29.659414 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:07:29.659421 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:29.659431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:07:29.659443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:07:29.659449 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:29.659456 | orchestrator | 2026-03-09 01:07:29.659462 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-09 01:07:29.659471 | orchestrator | Monday 09 March 2026 01:06:07 +0000 (0:00:02.859) 0:00:55.869 ********** 2026-03-09 01:07:29.659478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:07:29.659484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:07:29.659491 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:29.659500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:07:29.659507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:07:29.659513 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:29.659525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:07:29.659538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:07:29.659545 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:29.659551 | orchestrator | 2026-03-09 01:07:29.659557 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-09 01:07:29.659564 | orchestrator | Monday 09 March 2026 01:06:12 +0000 (0:00:05.273) 0:01:01.143 ********** 2026-03-09 01:07:29.659570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659620 | orchestrator | 2026-03-09 01:07:29.659626 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-09 01:07:29.659632 | orchestrator | Monday 09 March 2026 01:06:18 +0000 (0:00:05.369) 0:01:06.512 ********** 2026-03-09 01:07:29.659643 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659694 | orchestrator | 2026-03-09 01:07:29.659700 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-09 01:07:29.659707 | orchestrator | Monday 09 March 2026 01:06:31 +0000 (0:00:13.702) 0:01:20.214 ********** 2026-03-09 01:07:29.659722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:07:29.659728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:07:29.659735 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:29.659741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:07:29.659748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:07:29.659754 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:29.659763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-09 01:07:29.659778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:07:29.659784 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:29.659790 | orchestrator | 2026-03-09 01:07:29.659796 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-09 01:07:29.659802 | orchestrator | Monday 09 March 2026 01:06:32 +0000 (0:00:00.644) 0:01:20.859 ********** 2026-03-09 01:07:29.659808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659814 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-09 01:07:29.659835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:07:29.659857 | orchestrator | 2026-03-09 01:07:29.659863 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-09 01:07:29.659869 | orchestrator | Monday 09 March 2026 01:06:35 +0000 (0:00:02.527) 0:01:23.387 ********** 2026-03-09 01:07:29.659876 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:07:29.659882 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:07:29.659887 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:07:29.659894 | orchestrator | 2026-03-09 01:07:29.659900 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-09 01:07:29.659906 | orchestrator | Monday 09 March 2026 01:06:35 +0000 (0:00:00.316) 0:01:23.703 ********** 2026-03-09 01:07:29.659912 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:29.659918 | orchestrator | 2026-03-09 01:07:29.659924 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-09 01:07:29.659930 | orchestrator | Monday 09 March 2026 01:06:37 +0000 (0:00:02.318) 0:01:26.022 ********** 2026-03-09 01:07:29.659936 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:29.659942 | orchestrator | 2026-03-09 01:07:29.659948 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-09 01:07:29.659954 | orchestrator | Monday 09 March 2026 01:06:40 +0000 (0:00:02.714) 0:01:28.737 ********** 2026-03-09 01:07:29.659960 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:29.659965 | orchestrator | 2026-03-09 01:07:29.659972 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-09 01:07:29.659979 | orchestrator | Monday 09 March 2026 01:06:57 +0000 (0:00:16.872) 0:01:45.610 ********** 2026-03-09 01:07:29.659985 | orchestrator | 2026-03-09 
01:07:29.659996 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-09 01:07:29.660002 | orchestrator | Monday 09 March 2026 01:06:57 +0000 (0:00:00.075) 0:01:45.685 ********** 2026-03-09 01:07:29.660008 | orchestrator | 2026-03-09 01:07:29.660014 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-09 01:07:29.660020 | orchestrator | Monday 09 March 2026 01:06:57 +0000 (0:00:00.071) 0:01:45.757 ********** 2026-03-09 01:07:29.660026 | orchestrator | 2026-03-09 01:07:29.660032 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-09 01:07:29.660038 | orchestrator | Monday 09 March 2026 01:06:57 +0000 (0:00:00.070) 0:01:45.827 ********** 2026-03-09 01:07:29.660044 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:29.660050 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:29.660060 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:29.660066 | orchestrator | 2026-03-09 01:07:29.660073 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-09 01:07:29.660079 | orchestrator | Monday 09 March 2026 01:07:16 +0000 (0:00:18.565) 0:02:04.393 ********** 2026-03-09 01:07:29.660086 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:07:29.660092 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:07:29.660099 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:07:29.660104 | orchestrator | 2026-03-09 01:07:29.660111 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:07:29.660117 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-09 01:07:29.660125 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 01:07:29.660132 | orchestrator | 
testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 01:07:29.660139 | orchestrator | 2026-03-09 01:07:29.660145 | orchestrator | 2026-03-09 01:07:29.660236 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:07:29.660247 | orchestrator | Monday 09 March 2026 01:07:28 +0000 (0:00:12.231) 0:02:16.625 ********** 2026-03-09 01:07:29.660253 | orchestrator | =============================================================================== 2026-03-09 01:07:29.660260 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.57s 2026-03-09 01:07:29.660293 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.87s 2026-03-09 01:07:29.660299 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 13.70s 2026-03-09 01:07:29.660305 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 12.23s 2026-03-09 01:07:29.660311 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.10s 2026-03-09 01:07:29.660317 | orchestrator | magnum : Copying over config.json files for services -------------------- 5.37s 2026-03-09 01:07:29.660323 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 5.27s 2026-03-09 01:07:29.660329 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.40s 2026-03-09 01:07:29.660335 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.26s 2026-03-09 01:07:29.660341 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.26s 2026-03-09 01:07:29.660347 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.21s 2026-03-09 01:07:29.660353 | orchestrator | service-ks-register : magnum | Creating projects 
------------------------ 3.73s 2026-03-09 01:07:29.660359 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.73s 2026-03-09 01:07:29.660364 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.72s 2026-03-09 01:07:29.660370 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.71s 2026-03-09 01:07:29.660382 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.30s 2026-03-09 01:07:29.660388 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 2.86s 2026-03-09 01:07:29.660394 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.71s 2026-03-09 01:07:29.660399 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.69s 2026-03-09 01:07:29.660405 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.53s 2026-03-09 01:07:29.674737 | orchestrator | 2026-03-09 01:07:29 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:07:29.675592 | orchestrator | 2026-03-09 01:07:29 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:07:29.675615 | orchestrator | 2026-03-09 01:07:29 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:32.729399 | orchestrator | 2026-03-09 01:07:32 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:07:32.729467 | orchestrator | 2026-03-09 01:07:32 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:07:32.730108 | orchestrator | 2026-03-09 01:07:32 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:07:32.731536 | orchestrator | 2026-03-09 01:07:32 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:07:32.731583 | orchestrator | 
2026-03-09 01:07:32 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:35.797390 | orchestrator | 2026-03-09 01:07:35 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:07:35.799577 | orchestrator | 2026-03-09 01:07:35 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:07:35.800441 | orchestrator | 2026-03-09 01:07:35 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:07:35.804556 | orchestrator | 2026-03-09 01:07:35 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:07:35.806505 | orchestrator | 2026-03-09 01:07:35 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:38.866467 | orchestrator | 2026-03-09 01:07:38 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:07:38.866569 | orchestrator | 2026-03-09 01:07:38 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:07:38.868731 | orchestrator | 2026-03-09 01:07:38 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:07:38.871264 | orchestrator | 2026-03-09 01:07:38 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:07:38.871345 | orchestrator | 2026-03-09 01:07:38 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:41.910146 | orchestrator | 2026-03-09 01:07:41 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:07:41.911092 | orchestrator | 2026-03-09 01:07:41 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:07:41.912833 | orchestrator | 2026-03-09 01:07:41 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:07:41.914175 | orchestrator | 2026-03-09 01:07:41 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:07:41.914196 | orchestrator | 2026-03-09 01:07:41 | INFO  | 
Wait 1 second(s) until the next check 2026-03-09 01:07:44.957962 | orchestrator | 2026-03-09 01:07:44 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:07:44.960446 | orchestrator | 2026-03-09 01:07:44 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:07:44.962195 | orchestrator | 2026-03-09 01:07:44 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:07:44.965267 | orchestrator | 2026-03-09 01:07:44 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:07:44.965363 | orchestrator | 2026-03-09 01:07:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:48.075626 | orchestrator | 2026-03-09 01:07:48 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:07:48.075694 | orchestrator | 2026-03-09 01:07:48 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:07:48.075701 | orchestrator | 2026-03-09 01:07:48 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state STARTED 2026-03-09 01:07:48.075707 | orchestrator | 2026-03-09 01:07:48 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:07:48.075712 | orchestrator | 2026-03-09 01:07:48 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:51.139668 | orchestrator | 2026-03-09 01:07:51 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:07:51.139751 | orchestrator | 2026-03-09 01:07:51 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED 2026-03-09 01:07:51.139763 | orchestrator | 2026-03-09 01:07:51 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:07:51.139771 | orchestrator | 2026-03-09 01:07:51 | INFO  | Task 67df5c99-4244-488e-92d4-0c6a446f1a93 is in state SUCCESS 2026-03-09 01:07:51.139779 | orchestrator | 2026-03-09 01:07:51 | INFO  | Task 
0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:07:51.139788 | orchestrator | 2026-03-09 01:07:51 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:54.247212 | orchestrator | 2026-03-09 01:07:54 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:07:54.247557 | orchestrator | 2026-03-09 01:07:54 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED 2026-03-09 01:07:54.247583 | orchestrator | 2026-03-09 01:07:54 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:07:54.247595 | orchestrator | 2026-03-09 01:07:54 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:07:54.247606 | orchestrator | 2026-03-09 01:07:54 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:07:57.241373 | orchestrator | 2026-03-09 01:07:57 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:07:57.241520 | orchestrator | 2026-03-09 01:07:57 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED 2026-03-09 01:07:57.244253 | orchestrator | 2026-03-09 01:07:57 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:07:57.246208 | orchestrator | 2026-03-09 01:07:57 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:07:57.246243 | orchestrator | 2026-03-09 01:07:57 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:00.300245 | orchestrator | 2026-03-09 01:08:00 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:08:00.301715 | orchestrator | 2026-03-09 01:08:00 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED 2026-03-09 01:08:00.304454 | orchestrator | 2026-03-09 01:08:00 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:08:00.306415 | orchestrator | 2026-03-09 01:08:00 | INFO  | Task 
0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:08:00.306483 | orchestrator | 2026-03-09 01:08:00 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:03.353110 | orchestrator | 2026-03-09 01:08:03 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:08:03.357164 | orchestrator | 2026-03-09 01:08:03 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED 2026-03-09 01:08:03.357945 | orchestrator | 2026-03-09 01:08:03 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:08:03.360520 | orchestrator | 2026-03-09 01:08:03 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:08:03.360590 | orchestrator | 2026-03-09 01:08:03 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:06.397440 | orchestrator | 2026-03-09 01:08:06 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state STARTED 2026-03-09 01:08:06.398008 | orchestrator | 2026-03-09 01:08:06 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED 2026-03-09 01:08:06.399831 | orchestrator | 2026-03-09 01:08:06 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:08:06.400572 | orchestrator | 2026-03-09 01:08:06 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:08:06.400616 | orchestrator | 2026-03-09 01:08:06 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:08:09.433547 | orchestrator | 2026-03-09 01:08:09 | INFO  | Task cb1f4ea8-59d5-4e03-b892-7e076954a8aa is in state SUCCESS 2026-03-09 01:08:09.433625 | orchestrator | 2026-03-09 01:08:09.433635 | orchestrator | 2026-03-09 01:08:09.433644 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-09 01:08:09.433651 | orchestrator | 2026-03-09 01:08:09.433659 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 
2026-03-09 01:08:09.433666 | orchestrator | Monday 09 March 2026 01:01:41 +0000 (0:00:00.113) 0:00:00.115 ********** 2026-03-09 01:08:09.433674 | orchestrator | changed: [localhost] 2026-03-09 01:08:09.433682 | orchestrator | 2026-03-09 01:08:09.433689 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-09 01:08:09.433696 | orchestrator | Monday 09 March 2026 01:01:43 +0000 (0:00:01.544) 0:00:01.660 ********** 2026-03-09 01:08:09.433703 | orchestrator | 2026-03-09 01:08:09.433710 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-09 01:08:09.433717 | orchestrator | 2026-03-09 01:08:09.433723 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-09 01:08:09.433730 | orchestrator | 2026-03-09 01:08:09.433737 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-09 01:08:09.433744 | orchestrator | 2026-03-09 01:08:09.433751 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-09 01:08:09.433758 | orchestrator | 2026-03-09 01:08:09.433764 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-09 01:08:09.433771 | orchestrator | 2026-03-09 01:08:09.433778 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-09 01:08:09.433785 | orchestrator | 2026-03-09 01:08:09.433791 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-09 01:08:09.433798 | orchestrator | changed: [localhost] 2026-03-09 01:08:09.433805 | orchestrator | 2026-03-09 01:08:09.433812 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-09 01:08:09.433819 | orchestrator | Monday 09 March 2026 01:07:33 +0000 (0:05:49.791) 0:05:51.452 
********** 2026-03-09 01:08:09.433826 | orchestrator | changed: [localhost] 2026-03-09 01:08:09.433833 | orchestrator | 2026-03-09 01:08:09.433840 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:08:09.433847 | orchestrator | 2026-03-09 01:08:09.433853 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:08:09.433881 | orchestrator | Monday 09 March 2026 01:07:47 +0000 (0:00:14.642) 0:06:06.094 ********** 2026-03-09 01:08:09.433888 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:08:09.433895 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:08:09.433902 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:08:09.433908 | orchestrator | 2026-03-09 01:08:09.433915 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:08:09.433922 | orchestrator | Monday 09 March 2026 01:07:48 +0000 (0:00:00.398) 0:06:06.493 ********** 2026-03-09 01:08:09.433929 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-09 01:08:09.433936 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-03-09 01:08:09.433943 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-09 01:08:09.433950 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-09 01:08:09.433956 | orchestrator | 2026-03-09 01:08:09.433976 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-09 01:08:09.433983 | orchestrator | skipping: no hosts matched 2026-03-09 01:08:09.433990 | orchestrator | 2026-03-09 01:08:09.433997 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:08:09.434004 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:08:09.434013 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:08:09.434087 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:08:09.434101 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:08:09.434112 | orchestrator | 2026-03-09 01:08:09.434123 | orchestrator | 2026-03-09 01:08:09.434134 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:08:09.434146 | orchestrator | Monday 09 March 2026 01:07:48 +0000 (0:00:00.858) 0:06:07.352 ********** 2026-03-09 01:08:09.434158 | orchestrator | =============================================================================== 2026-03-09 01:08:09.434170 | orchestrator | Download ironic-agent initramfs --------------------------------------- 349.79s 2026-03-09 01:08:09.434183 | orchestrator | Download ironic-agent kernel ------------------------------------------- 14.64s 2026-03-09 01:08:09.434195 | orchestrator | Ensure the destination directory exists --------------------------------- 1.55s 2026-03-09 01:08:09.434207 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.86s 2026-03-09 01:08:09.434217 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2026-03-09 01:08:09.434230 | orchestrator | 2026-03-09 01:08:09.434243 | orchestrator | 2026-03-09 01:08:09.434256 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:08:09.434269 | orchestrator | 2026-03-09 01:08:09.434282 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:08:09.434295 | orchestrator | Monday 09 March 2026 01:07:24 +0000 (0:00:00.336) 0:00:00.336 ********** 2026-03-09 01:08:09.434308 | orchestrator | ok: [testbed-manager] 2026-03-09 
01:08:09.434343 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:08:09.434352 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:08:09.434360 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:08:09.434368 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:08:09.434376 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:08:09.434384 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:08:09.434392 | orchestrator | 2026-03-09 01:08:09.434417 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:08:09.434425 | orchestrator | Monday 09 March 2026 01:07:25 +0000 (0:00:01.053) 0:00:01.390 ********** 2026-03-09 01:08:09.434434 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-09 01:08:09.434451 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-09 01:08:09.434460 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-09 01:08:09.434468 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-09 01:08:09.434476 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-09 01:08:09.434485 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-09 01:08:09.434493 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-09 01:08:09.434501 | orchestrator | 2026-03-09 01:08:09.434509 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-09 01:08:09.434518 | orchestrator | 2026-03-09 01:08:09.434526 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-09 01:08:09.434533 | orchestrator | Monday 09 March 2026 01:07:26 +0000 (0:00:00.843) 0:00:02.234 ********** 2026-03-09 01:08:09.434540 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-09 01:08:09.434548 | orchestrator | 2026-03-09 01:08:09.434555 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-09 01:08:09.434562 | orchestrator | Monday 09 March 2026 01:07:28 +0000 (0:00:01.898) 0:00:04.132 ********** 2026-03-09 01:08:09.434568 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-09 01:08:09.434575 | orchestrator | 2026-03-09 01:08:09.434582 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-09 01:08:09.434588 | orchestrator | Monday 09 March 2026 01:07:32 +0000 (0:00:04.004) 0:00:08.137 ********** 2026-03-09 01:08:09.434595 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-09 01:08:09.434602 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-09 01:08:09.434609 | orchestrator | 2026-03-09 01:08:09.434615 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-09 01:08:09.434663 | orchestrator | Monday 09 March 2026 01:07:41 +0000 (0:00:09.021) 0:00:17.159 ********** 2026-03-09 01:08:09.434671 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-09 01:08:09.434678 | orchestrator | 2026-03-09 01:08:09.434685 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-09 01:08:09.434692 | orchestrator | Monday 09 March 2026 01:07:45 +0000 (0:00:04.135) 0:00:21.294 ********** 2026-03-09 01:08:09.434698 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-09 01:08:09.434705 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:08:09.434712 | orchestrator | 2026-03-09 01:08:09.434725 | orchestrator | TASK [service-ks-register : 
ceph-rgw | Creating roles] ************************* 2026-03-09 01:08:09.434733 | orchestrator | Monday 09 March 2026 01:07:50 +0000 (0:00:05.019) 0:00:26.314 ********** 2026-03-09 01:08:09.434740 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-09 01:08:09.434747 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-09 01:08:09.434753 | orchestrator | 2026-03-09 01:08:09.434760 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-09 01:08:09.434767 | orchestrator | Monday 09 March 2026 01:07:59 +0000 (0:00:09.098) 0:00:35.412 ********** 2026-03-09 01:08:09.434774 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-09 01:08:09.434781 | orchestrator | 2026-03-09 01:08:09.434787 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:08:09.434794 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:08:09.434801 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:08:09.434813 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:08:09.434820 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:08:09.434827 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:08:09.434833 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:08:09.434840 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:08:09.434847 | orchestrator | 2026-03-09 01:08:09.434853 | orchestrator | 2026-03-09 01:08:09.434860 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-09 01:08:09.434867 | orchestrator | Monday 09 March 2026 01:08:06 +0000 (0:00:06.409) 0:00:41.822 ********** 2026-03-09 01:08:09.434874 | orchestrator | =============================================================================== 2026-03-09 01:08:09.434881 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 9.10s 2026-03-09 01:08:09.434893 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 9.02s 2026-03-09 01:08:09.434900 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.41s 2026-03-09 01:08:09.434906 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 5.02s 2026-03-09 01:08:09.434913 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 4.14s 2026-03-09 01:08:09.434920 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.00s 2026-03-09 01:08:09.434926 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.90s 2026-03-09 01:08:09.434933 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.05s 2026-03-09 01:08:09.434939 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2026-03-09 01:08:09.435014 | orchestrator | 2026-03-09 01:08:09 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED 2026-03-09 01:08:09.436905 | orchestrator | 2026-03-09 01:08:09 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:08:09.441432 | orchestrator | 2026-03-09 01:08:09 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:08:09.444084 | orchestrator | 2026-03-09 01:08:09 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED 2026-03-09 01:08:09.444155 | 
orchestrator | 2026-03-09 01:08:09 | INFO  | Wait 1 second(s) until the next check
[repetitive polling output condensed: the four tasks above were re-checked roughly every 3 seconds and remained in state STARTED from 01:08:12 through 01:09:35]
2026-03-09 01:09:35.134945 | orchestrator | 2026-03-09 01:09:35 | INFO  | Task
a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED
2026-03-09 01:09:35.135998 | orchestrator | 2026-03-09 01:09:35 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:09:35.139456 | orchestrator | 2026-03-09 01:09:35 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED
2026-03-09 01:09:35.141482 | orchestrator | 2026-03-09 01:09:35 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state STARTED
2026-03-09 01:09:35.141640 | orchestrator | 2026-03-09 01:09:35 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:09:38.176630 | orchestrator | 2026-03-09 01:09:38 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED
2026-03-09 01:09:38.179065 | orchestrator | 2026-03-09 01:09:38 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:09:38.180247 | orchestrator | 2026-03-09 01:09:38 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED
2026-03-09 01:09:38.184669 | orchestrator | 2026-03-09 01:09:38 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED
2026-03-09 01:09:38.187750 | orchestrator | 2026-03-09 01:09:38 | INFO  | Task 0aae5b24-f58f-41f5-be93-58ad3bb9942f is in state SUCCESS
2026-03-09 01:09:38.189647 | orchestrator |
2026-03-09 01:09:38.189682 | orchestrator |
2026-03-09 01:09:38.189691 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:09:38.189700 | orchestrator |
2026-03-09 01:09:38.189707 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:09:38.189715 | orchestrator | Monday 09 March 2026 01:05:31 +0000 (0:00:00.338)       0:00:00.338 **********
2026-03-09 01:09:38.189723 | orchestrator | ok: [testbed-manager]
2026-03-09 01:09:38.189731 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:09:38.189738 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:09:38.189746 | orchestrator | ok:
[testbed-node-2]
2026-03-09 01:09:38.189753 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:09:38.189777 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:09:38.189785 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:09:38.189792 | orchestrator |
2026-03-09 01:09:38.189799 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:09:38.189819 | orchestrator | Monday 09 March 2026 01:05:32 +0000 (0:00:01.022)       0:00:01.360 **********
2026-03-09 01:09:38.189844 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-09 01:09:38.189852 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-09 01:09:38.189860 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-03-09 01:09:38.189867 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-09 01:09:38.189875 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-09 01:09:38.189882 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-09 01:09:38.189889 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-09 01:09:38.189938 | orchestrator |
2026-03-09 01:09:38.189946 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-09 01:09:38.189953 | orchestrator |
2026-03-09 01:09:38.189960 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-09 01:09:38.189968 | orchestrator | Monday 09 March 2026 01:05:33 +0000 (0:00:00.888)       0:00:02.249 **********
2026-03-09 01:09:38.189976 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-09 01:09:38.189984 | orchestrator |
2026-03-09 01:09:38.189991 | orchestrator | TASK [prometheus : Ensuring config directories exist]
**************************
2026-03-09 01:09:38.189999 | orchestrator | Monday 09 March 2026 01:05:35 +0000 (0:00:01.765)       0:00:04.015 **********
2026-03-09 01:09:38.190008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:38.190057 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-09 01:09:38.190066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:38.190075 | orchestrator | changed: [testbed-manager] => (item=prometheus-node-exporter) [duplicate item config omitted; identical to first occurrence above]
2026-03-09 01:09:38.190101 | orchestrator | changed: [testbed-node-1] => (item=prometheus-node-exporter) [duplicate omitted, as above]
2026-03-09 01:09:38.190114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:38.190123 | orchestrator | changed: [testbed-node-3] => (item=prometheus-node-exporter) [duplicate omitted, as above]
2026-03-09 01:09:38.190131 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:38.190140 | orchestrator | changed: [testbed-node-2] => (item=prometheus-node-exporter) [duplicate omitted, as above]
2026-03-09 01:09:38.190147 | orchestrator | changed: [testbed-node-1] => (item=prometheus-mysqld-exporter) [duplicate omitted, as above]
2026-03-09 01:09:38.190155 | orchestrator | changed: [testbed-node-0] => (item=prometheus-cadvisor) [duplicate omitted, as above]
2026-03-09 01:09:38.190164 | orchestrator | changed: [testbed-node-3] => (item=prometheus-cadvisor) [duplicate omitted, as above]
2026-03-09 01:09:38.190207 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-09 01:09:38.190219 | orchestrator | changed: [testbed-node-2] => (item=prometheus-mysqld-exporter) [duplicate omitted, as above]
2026-03-09 01:09:38.190227 | orchestrator | changed: [testbed-node-1] => (item=prometheus-memcached-exporter) [duplicate omitted, as above]
2026-03-09 01:09:38.190235 | orchestrator | changed: [testbed-node-4] => (item=prometheus-node-exporter) [duplicate omitted, as above]
2026-03-09 01:09:38.190243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:38.190271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-09 01:09:38.190324 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:38.190339 | orchestrator | changed: [testbed-node-2] => (item=prometheus-memcached-exporter) [duplicate omitted, as above]
2026-03-09 01:09:38.190494 | orchestrator | changed: [testbed-node-5] => (item=prometheus-node-exporter) [duplicate omitted, as above]
2026-03-09 01:09:38.190511 | orchestrator | changed: [testbed-node-1] => (item=prometheus-cadvisor) [duplicate omitted, as above]
2026-03-09 01:09:38.190530 | orchestrator | changed: [testbed-node-4] => (item=prometheus-cadvisor) [duplicate omitted, as above]
2026-03-09 01:09:38.190538 | orchestrator | changed: [testbed-node-2] => (item=prometheus-cadvisor) [duplicate omitted, as above]
2026-03-09 01:09:38.190546 | orchestrator | changed: [testbed-node-5] => (item=prometheus-cadvisor) [duplicate omitted, as above]
2026-03-09 01:09:38.190553 | orchestrator | changed: [testbed-node-1] => (item=prometheus-elasticsearch-exporter) [duplicate omitted, as above]
2026-03-09 01:09:38.190567 | orchestrator | changed: [testbed-node-4] => (item=prometheus-libvirt-exporter) [duplicate omitted, as above]
2026-03-09 01:09:38.190595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value':
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.190617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.190633 | orchestrator | 2026-03-09 01:09:38.190641 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-09 01:09:38.190649 | orchestrator | Monday 09 March 2026 01:05:39 +0000 (0:00:04.340) 0:00:08.355 ********** 2026-03-09 01:09:38.190666 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:09:38.190674 | orchestrator | 2026-03-09 01:09:38.190681 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-09 01:09:38.190688 | orchestrator | Monday 09 March 2026 01:05:41 +0000 (0:00:02.024) 0:00:10.380 ********** 2026-03-09 01:09:38.190696 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.190704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.190711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.190724 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-09 01:09:38.190737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.190745 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.190756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.190764 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.190771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.190779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.190791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 
01:09:38.190799 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.190811 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.190822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.190830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.190837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.190845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.190852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.190864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.190872 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.190883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.190891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.190902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.190909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.190917 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-09 01:09:38.190930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.190938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.191307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.191326 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.191334 | orchestrator | 2026-03-09 01:09:38.191342 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-09 01:09:38.191349 | orchestrator | Monday 09 March 2026 01:05:49 +0000 (0:00:07.948) 0:00:18.329 ********** 2026-03-09 01:09:38.191357 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-09 01:09:38.191365 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:38.191380 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.191404 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-09 01:09:38.191418 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191426 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:38.191434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:38.191445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191466 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.191474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191481 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:38.191489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:38.191497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.191526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191534 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:38.191549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.191572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191580 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:38.191587 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:38.191599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:38.191610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.191618 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.191630 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:38.191637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:38.191645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.191653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.191661 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:38.191668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:38.191676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.191688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}})
2026-03-09 01:09:38.191696 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:09:38.191703 | orchestrator |
2026-03-09 01:09:38.191725 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-03-09 01:09:38.191733 | orchestrator | Monday 09 March 2026 01:05:51 +0000 (0:00:01.843) 0:00:20.172 **********
2026-03-09 01:09:38.191744 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-09 01:09:38.191773 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:38.191794 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image':
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.191802 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-09 01:09:38.191810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:38.191822 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191903 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.191920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191927 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:38.191938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:38.191946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.191985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.191995 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:38.192003 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:38.192012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:38.192022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.192031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.192040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:38.192050 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:38.192058 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.192088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.192101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.192112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-09 01:09:38.192120 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:38.192127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-09 01:09:38.192135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.192142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-09 01:09:38.192150 | orchestrator | skipping: 
[testbed-node-4]
2026-03-09 01:09:38.192157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-09 01:09:38.192165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-09 01:09:38.192176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-09 01:09:38.192188 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:09:38.192195 | orchestrator |
2026-03-09 01:09:38.192202 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-03-09 01:09:38.192210 | orchestrator | Monday 09 March 2026 01:05:53 +0000
(0:00:02.463) 0:00:22.635 ********** 2026-03-09 01:09:38.192220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.192228 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-09 01:09:38.192236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.192243 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.192251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.192258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.192274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.192282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.192292 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.192300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.192308 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.192316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.192323 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.192331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.192349 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.192357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.192367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.192375 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.192383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.192402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.192410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.192422 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.192433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.192441 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.192449 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-09 01:09:38.192457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.192465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.192473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.192484 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.192492 | orchestrator | 2026-03-09 01:09:38.192499 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-09 01:09:38.192507 | orchestrator | Monday 09 March 2026 01:06:00 +0000 (0:00:06.734) 0:00:29.369 ********** 2026-03-09 01:09:38.192514 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:09:38.192522 | orchestrator | 2026-03-09 01:09:38.192529 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-09 01:09:38.192540 | orchestrator | Monday 09 March 2026 01:06:03 +0000 (0:00:02.666) 0:00:32.036 ********** 2026-03-09 01:09:38.192568 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102859, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0940833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192579 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102859, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0940833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.192587 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102859, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0940833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192595 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102859, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0940833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192603 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102888, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1030996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192615 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102888, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1030996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192627 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102859, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0940833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192635 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102859, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0940833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192645 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102859, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0940833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192653 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102888, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1030996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192661 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102888, 'dev': 181, 'nlink': 
1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1030996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.192674 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102888, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1030996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192682 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102888, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1030996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192693 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1102846, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0934162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192701 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1102846, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0934162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192711 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1102846, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0934162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192719 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102888, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1030996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192726 | orchestrator 
| skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102877, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1002066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192739 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102877, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1002066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192746 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102877, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1002066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192758 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1102846, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0934162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.192765 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1102846, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0934162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193199 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1102846, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0934162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.193218 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102841, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 
'mtime': 1773014549.0, 'ctime': 1773015422.0883763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193226 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102841, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0883763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193239 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102841, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0883763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193247 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1102846, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0934162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193254 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102877, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1002066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193262 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102877, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1002066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193273 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102860, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0955892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193287 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102877, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1002066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193295 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102860, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0955892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193308 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102860, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0955892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193317 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1102869, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.099027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193325 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102841, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0883763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193333 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102841, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0883763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193347 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1102861, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 
1773014549.0, 'ctime': 1773015422.0965576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193361 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102877, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1002066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.193374 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102841, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0883763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193382 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1102869, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.099027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.193 | orchestrator | skipping: [testbed-node-0] ... [testbed-node-5] => (one item per rule file under /operations/prometheus/: alertmanager.rules size=5065, alertmanager.rec.rules size=3, ceph.rec.rules size=3, elasticsearch.rules size=5987, haproxy.rules size=7933, hardware.rules size=5593, mysql.rules size=3792, node.rules size=14018, node.rec.rules size=2309, prometheus-extra.rules size=7408, prometheus.rec.rules size=3, rabbitmq.rules size=3539, redfish.rules size=334; all mode=0644, owner root:root)  2026-03-09 01:09:38.194 | orchestrator | skipping: [testbed-node-1]  2026-03-09 01:09:38.193 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, size=3900)  2026-03-09 01:09:38.193 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules, size=7933)  2026-03-09 01:09:38.193 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules, size=14018)  2026-03-09 01:09:38.194 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules, size=5593)  2026-03-09 01:09:38.194 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/elasticsearch.rules, size=5987)  2026-03-09 
01:09:38.194260 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1102903, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1052485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.194268 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:38.194277 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102866, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0979578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.194285 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1102903, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1052485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.194293 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:38.194302 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1102864, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0972774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.194315 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1102864, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0972774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.194326 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1102903, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1052485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.194335 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:38.194347 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1102864, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0972774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.194355 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1102903, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1052485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.194364 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:38.194372 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1102903, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1052485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-09 01:09:38.194380 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:38.194403 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102886, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1020508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.194417 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102837, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.087663, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.194425 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1102906, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.106467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.194437 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1102884, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1014552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.194450 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102843, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.08872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.194459 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1102840, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.087975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.194467 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102866, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 
1773015422.0979578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.194475 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1102864, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0972774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.194487 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1102903, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.1052485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-09 01:09:38.194496 | orchestrator | 2026-03-09 01:09:38.194504 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-09 01:09:38.194512 | orchestrator | Monday 09 March 2026 01:06:42 +0000 (0:00:39.545) 0:01:11.582 ********** 2026-03-09 01:09:38.194520 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:09:38.194528 | orchestrator | 2026-03-09 01:09:38.194536 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-09 
01:09:38.194544 | orchestrator | Monday 09 March 2026 01:06:43 +0000 (0:00:00.958) 0:01:12.540 ********** 2026-03-09 01:09:38.194552 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:38.194560 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194568 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:38.194576 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194584 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-09 01:09:38.194593 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:09:38.194601 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:38.194609 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194617 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:38.194628 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194636 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-09 01:09:38.194644 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-09 01:09:38.194652 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:38.194660 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194668 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:38.194676 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194687 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-09 01:09:38.194696 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:09:38.194704 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:38.194712 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 
01:09:38.194720 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:38.194728 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194735 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-09 01:09:38.194744 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 01:09:38.194752 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:38.194760 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194767 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:38.194775 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194783 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-09 01:09:38.194795 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-09 01:09:38.194803 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:38.194811 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194819 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:38.194827 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194835 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-09 01:09:38.194843 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 01:09:38.194851 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:38.194858 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194866 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-09 01:09:38.194874 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-09 01:09:38.194882 | orchestrator | node-5/prometheus.yml.d' is not a directory 
2026-03-09 01:09:38.194890 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 01:09:38.194898 | orchestrator | 2026-03-09 01:09:38.194906 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-09 01:09:38.194914 | orchestrator | Monday 09 March 2026 01:06:45 +0000 (0:00:02.326) 0:01:14.867 ********** 2026-03-09 01:09:38.194922 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:09:38.194930 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:09:38.194938 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:38.194946 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:38.194954 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:09:38.194962 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:38.194970 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:09:38.194978 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:38.194986 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:09:38.194994 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:38.195001 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-09 01:09:38.195010 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:38.195018 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-09 01:09:38.195026 | orchestrator | 2026-03-09 01:09:38.195034 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-09 01:09:38.195042 | orchestrator | Monday 09 March 2026 01:07:04 +0000 
(0:00:19.008) 0:01:33.875 ********** 2026-03-09 01:09:38.195055 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:09:38.195070 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:38.195085 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:09:38.195107 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:09:38.195124 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:38.195139 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:38.195153 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:09:38.195168 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:38.195181 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:09:38.195195 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:38.195216 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-09 01:09:38.195244 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:38.195257 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-09 01:09:38.195270 | orchestrator | 2026-03-09 01:09:38.195282 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-09 01:09:38.195296 | orchestrator | Monday 09 March 2026 01:07:08 +0000 (0:00:03.118) 0:01:36.993 ********** 2026-03-09 01:09:38.195310 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:09:38.195332 | orchestrator | skipping: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:09:38.195346 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:38.195361 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:09:38.195376 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:38.195438 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:38.195455 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:09:38.195469 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:38.195482 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:09:38.195496 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:38.195507 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-09 01:09:38.195516 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-09 01:09:38.195526 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:38.195536 | orchestrator | 2026-03-09 01:09:38.195546 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-09 01:09:38.195556 | orchestrator | Monday 09 March 2026 01:07:09 +0000 (0:00:01.742) 0:01:38.736 ********** 2026-03-09 01:09:38.195565 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:09:38.195575 | orchestrator | 2026-03-09 01:09:38.195585 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-09 01:09:38.195595 | orchestrator | Monday 09 March 2026 
01:07:10 +0000 (0:00:00.936) 0:01:39.672 ********** 2026-03-09 01:09:38.195604 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:38.195614 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:38.195623 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:38.195633 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:38.195642 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:38.195652 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:38.195661 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:38.195671 | orchestrator | 2026-03-09 01:09:38.195680 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-09 01:09:38.195690 | orchestrator | Monday 09 March 2026 01:07:11 +0000 (0:00:00.805) 0:01:40.478 ********** 2026-03-09 01:09:38.195700 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:38.195709 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:38.195719 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:38.195729 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:38.195738 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:09:38.195748 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:09:38.195757 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:09:38.195767 | orchestrator | 2026-03-09 01:09:38.195776 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-09 01:09:38.195786 | orchestrator | Monday 09 March 2026 01:07:13 +0000 (0:00:02.312) 0:01:42.790 ********** 2026-03-09 01:09:38.195804 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:38.195814 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:38.195823 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:38.195833 | orchestrator | 
skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:38.195845 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:38.195861 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:38.195873 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:38.195883 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:38.195893 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:38.195903 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:38.195912 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:38.195922 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:38.195932 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-09 01:09:38.195941 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:38.195951 | orchestrator | 2026-03-09 01:09:38.195961 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-09 01:09:38.195971 | orchestrator | Monday 09 March 2026 01:07:15 +0000 (0:00:02.010) 0:01:44.801 ********** 2026-03-09 01:09:38.195981 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:09:38.195990 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:38.196006 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:09:38.196016 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:09:38.196026 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:38.196036 | orchestrator | skipping: 
[testbed-node-1] 2026-03-09 01:09:38.196045 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:09:38.196055 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:38.196073 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-09 01:09:38.196084 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:09:38.196094 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:38.196104 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-09 01:09:38.196113 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:38.196123 | orchestrator | 2026-03-09 01:09:38.196133 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-09 01:09:38.196143 | orchestrator | Monday 09 March 2026 01:07:18 +0000 (0:00:02.715) 0:01:47.516 ********** 2026-03-09 01:09:38.196153 | orchestrator | [WARNING]: Skipped 2026-03-09 01:09:38.196163 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-09 01:09:38.196172 | orchestrator | due to this access issue: 2026-03-09 01:09:38.196182 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-09 01:09:38.196192 | orchestrator | not a directory 2026-03-09 01:09:38.196202 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-09 01:09:38.196212 | orchestrator | 2026-03-09 01:09:38.196221 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-09 01:09:38.196231 | orchestrator | Monday 09 March 2026 01:07:20 +0000 (0:00:02.031) 0:01:49.547 ********** 2026-03-09 01:09:38.196241 | orchestrator | skipping: 
[testbed-manager] 2026-03-09 01:09:38.196256 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:38.196266 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:38.196277 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:38.196293 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:38.196310 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:38.196327 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:38.196343 | orchestrator | 2026-03-09 01:09:38.196359 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-09 01:09:38.196375 | orchestrator | Monday 09 March 2026 01:07:21 +0000 (0:00:01.147) 0:01:50.695 ********** 2026-03-09 01:09:38.196446 | orchestrator | skipping: [testbed-manager] 2026-03-09 01:09:38.196467 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:09:38.196483 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:09:38.196499 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:09:38.196517 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:09:38.196535 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:09:38.196553 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:09:38.196571 | orchestrator | 2026-03-09 01:09:38.196590 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-09 01:09:38.196608 | orchestrator | Monday 09 March 2026 01:07:22 +0000 (0:00:01.072) 0:01:51.768 ********** 2026-03-09 01:09:38.196629 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-09 01:09:38.196647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.196671 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.196693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.196704 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.196731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.196742 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.196753 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.196763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-09 01:09:38.196773 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.196789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.196806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.196824 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.196834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.196845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-09 01:09:38.196855 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.196866 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-09 01:09:38.196882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.196895 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.196909 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.196917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.196925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.196934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.196942 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.196950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.196962 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.196975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-09 01:09:38.196988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-09 01:09:38.196996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-09 01:09:38.197005 | orchestrator |
2026-03-09 01:09:38.197013 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-03-09 01:09:38.197021 | orchestrator | Monday 09 March 2026 01:07:27 +0000 (0:00:04.826) 0:01:56.595 **********
2026-03-09 01:09:38.197029 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-09 01:09:38.197037 | orchestrator | skipping: [testbed-manager]
2026-03-09 01:09:38.197045 | orchestrator |
2026-03-09 01:09:38.197053 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-09 01:09:38.197061 | orchestrator | Monday 09 March 2026 01:07:29 +0000 (0:00:01.573) 0:01:58.168 **********
2026-03-09 01:09:38.197069 | orchestrator |
2026-03-09 01:09:38.197077 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-09 01:09:38.197085 | orchestrator | Monday 09 March 2026 01:07:29 +0000 (0:00:00.078) 0:01:58.247 **********
2026-03-09 01:09:38.197093 | orchestrator |
2026-03-09 01:09:38.197101 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-09 01:09:38.197109 | orchestrator | Monday 09 March 2026 01:07:29 +0000 (0:00:00.070) 0:01:58.318 **********
2026-03-09 01:09:38.197117 | orchestrator |
2026-03-09 01:09:38.197125 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-09 01:09:38.197133 | orchestrator | Monday 09 March 2026 01:07:29 +0000 (0:00:00.117) 0:01:58.435 **********
2026-03-09 01:09:38.197141 | orchestrator |
2026-03-09 01:09:38.197149 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-09 01:09:38.197157 | orchestrator | Monday 09 March 2026 01:07:29 +0000 (0:00:00.325) 0:01:58.761 **********
2026-03-09 01:09:38.197165 | orchestrator |
2026-03-09 01:09:38.197173 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-09 01:09:38.197181 | orchestrator | Monday 09 March 2026 01:07:29 +0000 (0:00:00.086) 0:01:58.847 **********
2026-03-09 01:09:38.197189 | orchestrator |
2026-03-09 01:09:38.197197 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-09 01:09:38.197205 | orchestrator | Monday 09 March 2026 01:07:29 +0000 (0:00:00.083) 0:01:58.931 **********
2026-03-09 01:09:38.197213 | orchestrator |
2026-03-09 01:09:38.197221 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-03-09 01:09:38.197229 | orchestrator | Monday 09 March 2026 01:07:30 +0000 (0:00:00.107) 0:01:59.038 **********
2026-03-09 01:09:38.197237 | orchestrator | changed: [testbed-manager]
2026-03-09 01:09:38.197245 | orchestrator |
2026-03-09 01:09:38.197253 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-03-09 01:09:38.197265 | orchestrator | Monday 09 March 2026 01:07:48 +0000 (0:00:18.504) 0:02:17.543 **********
2026-03-09 01:09:38.197273 | orchestrator | changed: [testbed-manager]
2026-03-09 01:09:38.197281 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:09:38.197289 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:09:38.197297 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:09:38.197306 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:09:38.197314 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:09:38.197322 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:09:38.197330 | orchestrator |
2026-03-09 01:09:38.197338 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-03-09 01:09:38.197346 | orchestrator | Monday 09 March 2026 01:08:05 +0000 (0:00:16.956) 0:02:34.500 **********
2026-03-09 01:09:38.197354 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:09:38.197363 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:09:38.197371 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:09:38.197378 | orchestrator |
2026-03-09 01:09:38.197407 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-03-09 01:09:38.197431 | orchestrator | Monday 09 March 2026 01:08:16 +0000 (0:00:11.299) 0:02:45.800 **********
2026-03-09 01:09:38.197446 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:09:38.197460 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:09:38.197468 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:09:38.197476 | orchestrator |
2026-03-09 01:09:38.197484 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-03-09 01:09:38.197492 | orchestrator | Monday 09 March 2026 01:08:27 +0000 (0:00:10.712) 0:02:56.513 **********
2026-03-09 01:09:38.197500 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:09:38.197514 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:09:38.197527 | orchestrator | changed: [testbed-manager]
2026-03-09 01:09:38.197542 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:09:38.197562 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:09:38.197576 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:09:38.197589 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:09:38.197604 | orchestrator |
2026-03-09 01:09:38.197617 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-03-09 01:09:38.197630 | orchestrator | Monday 09 March 2026 01:08:45 +0000 (0:00:18.367) 0:03:14.881 **********
2026-03-09 01:09:38.197643 | orchestrator | changed: [testbed-manager]
2026-03-09 01:09:38.197657 | orchestrator |
2026-03-09 01:09:38.197670 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-03-09 01:09:38.197684 | orchestrator | Monday 09 March 2026 01:08:58 +0000 (0:00:13.049) 0:03:27.930 **********
2026-03-09 01:09:38.197697 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:09:38.197709 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:09:38.197723 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:09:38.197736 | orchestrator |
2026-03-09 01:09:38.197748 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-09 01:09:38.197762 | orchestrator | Monday 09 March 2026 01:09:11 +0000 (0:00:12.289) 0:03:40.219 **********
2026-03-09 01:09:38.197776 | orchestrator | changed: [testbed-manager]
2026-03-09 01:09:38.197790 | orchestrator |
2026-03-09 01:09:38.197801 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-09 01:09:38.197809 | orchestrator | Monday 09 March 2026 01:09:21 +0000 (0:00:10.086) 0:03:50.306 **********
2026-03-09 01:09:38.197817 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:09:38.197825 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:09:38.197833 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:09:38.197841 | orchestrator |
2026-03-09 01:09:38.197848 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:09:38.197857 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-09 01:09:38.197873 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-09 01:09:38.197881 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-09 01:09:38.197889 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-09 01:09:38.197897 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-09 01:09:38.197905 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-09 01:09:38.197913 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-09 01:09:38.197921 | orchestrator |
2026-03-09 01:09:38.197929 | orchestrator |
2026-03-09 01:09:38.197938 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:09:38.197946 | orchestrator | Monday 09 March 2026 01:09:35 +0000 (0:00:14.202) 0:04:04.509 **********
2026-03-09 01:09:38.197954 | orchestrator | ===============================================================================
2026-03-09 01:09:38.197962 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 39.55s
2026-03-09 01:09:38.197970 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 19.01s
2026-03-09 01:09:38.197978 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.50s
2026-03-09 01:09:38.197986 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 18.37s
2026-03-09 01:09:38.197994 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.96s
2026-03-09 01:09:38.198002 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 14.20s
2026-03-09 01:09:38.198010 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.05s
2026-03-09 01:09:38.198046 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.29s
2026-03-09 01:09:38.198054 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.30s
2026-03-09 01:09:38.198062 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.71s
2026-03-09 01:09:38.198070 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.09s
2026-03-09 01:09:38.198078 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.95s
2026-03-09 01:09:38.198086 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.73s
2026-03-09 01:09:38.198094 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.83s
2026-03-09 01:09:38.198110 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.34s
2026-03-09 01:09:38.198119 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.12s
2026-03-09 01:09:38.198127 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.72s
2026-03-09 01:09:38.198135 | orchestrator | prometheus : Find custom prometheus alert rules files ------------------- 2.67s
2026-03-09 01:09:38.198143 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.46s
2026-03-09 01:09:38.198151 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.33s
2026-03-09 01:09:38.198167 | orchestrator | 2026-03-09 01:09:38 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:09:41.303189 | orchestrator | 2026-03-09 01:09:41 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED
2026-03-09 01:09:41.305265 | orchestrator | 2026-03-09 01:09:41 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:09:41.308684 | orchestrator | 2026-03-09 01:09:41 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED
2026-03-09 01:09:41.309521 | orchestrator | 2026-03-09 01:09:41 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in
state STARTED
2026-03-09 01:09:41.309555 | orchestrator | 2026-03-09 01:09:41 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:10:17.941326 | orchestrator | 2026-03-09 01:10:17 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED
2026-03-09 01:10:17.942969 | orchestrator | 2026-03-09 01:10:17 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:10:17.944464 | orchestrator | 2026-03-09 01:10:17 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED
2026-03-09 01:10:17.945827 | orchestrator | 2026-03-09 01:10:17 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED
2026-03-09 01:10:17.945869 | orchestrator | 2026-03-09 01:10:17 | INFO  | Wait 1 second(s) until the next
check 2026-03-09 01:10:20.983230 | orchestrator | 2026-03-09 01:10:20 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED 2026-03-09 01:10:20.984166 | orchestrator | 2026-03-09 01:10:20 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:10:20.985105 | orchestrator | 2026-03-09 01:10:20 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:10:20.985792 | orchestrator | 2026-03-09 01:10:20 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED 2026-03-09 01:10:20.985807 | orchestrator | 2026-03-09 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:24.042834 | orchestrator | 2026-03-09 01:10:24 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED 2026-03-09 01:10:24.042926 | orchestrator | 2026-03-09 01:10:24 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:10:24.044808 | orchestrator | 2026-03-09 01:10:24 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:10:24.046218 | orchestrator | 2026-03-09 01:10:24 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED 2026-03-09 01:10:24.046275 | orchestrator | 2026-03-09 01:10:24 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:10:27.092273 | orchestrator | 2026-03-09 01:10:27 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state STARTED 2026-03-09 01:10:27.093008 | orchestrator | 2026-03-09 01:10:27 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:10:27.094170 | orchestrator | 2026-03-09 01:10:27 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state STARTED 2026-03-09 01:12:27.199706 | orchestrator | 2026-03-09 01:12:27 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED 2026-03-09 01:12:27.199809 | orchestrator | 2026-03-09 01:12:27 | INFO  | Wait 1 second(s) until the next check 2026-03-09 
01:12:30.232832 | orchestrator | 2026-03-09 01:12:30.232916 | orchestrator | 2026-03-09 01:12:30 | INFO  | Task a992b23c-fc4f-4dc5-9c2a-958daa932914 is in state SUCCESS 2026-03-09 01:12:30.234291 | orchestrator | 2026-03-09 01:12:30.234389 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:12:30.234412 | orchestrator | 2026-03-09 01:12:30.234428 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:12:30.234444 | orchestrator | Monday 09 March 2026 01:08:00 +0000 (0:00:00.361) 0:00:00.361 ********** 2026-03-09 01:12:30.234488 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:12:30.234506 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:12:30.234521 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:12:30.234536 | orchestrator | 2026-03-09 01:12:30.234551 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:12:30.234567 | orchestrator | Monday 09 March 2026 01:08:01 +0000 (0:00:00.387) 0:00:00.748 ********** 2026-03-09 01:12:30.234615 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-09 01:12:30.234633 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-09 01:12:30.234649 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-09 01:12:30.234663 | orchestrator | 2026-03-09 01:12:30.234678 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-09 01:12:30.234720 | orchestrator | 2026-03-09 01:12:30.235031 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-09 01:12:30.235048 | orchestrator | Monday 09 March 2026 01:08:02 +0000 (0:00:00.613) 0:00:01.362 ********** 2026-03-09 01:12:30.235057 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 
01:12:30.235067 | orchestrator | 2026-03-09 01:12:30.235075 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-09 01:12:30.235083 | orchestrator | Monday 09 March 2026 01:08:02 +0000 (0:00:00.766) 0:00:02.129 ********** 2026-03-09 01:12:30.235092 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-09 01:12:30.235101 | orchestrator | 2026-03-09 01:12:30.235110 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-09 01:12:30.235118 | orchestrator | Monday 09 March 2026 01:08:06 +0000 (0:00:04.076) 0:00:06.206 ********** 2026-03-09 01:12:30.235132 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-09 01:12:30.235145 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-09 01:12:30.235185 | orchestrator | 2026-03-09 01:12:30.235203 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-09 01:12:30.235216 | orchestrator | Monday 09 March 2026 01:08:14 +0000 (0:00:07.166) 0:00:13.373 ********** 2026-03-09 01:12:30.235230 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:12:30.235243 | orchestrator | 2026-03-09 01:12:30.235255 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-09 01:12:30.235269 | orchestrator | Monday 09 March 2026 01:08:17 +0000 (0:00:03.624) 0:00:16.998 ********** 2026-03-09 01:12:30.235282 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-09 01:12:30.235295 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:12:30.235309 | orchestrator | 2026-03-09 01:12:30.235323 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 
2026-03-09 01:12:30.235336 | orchestrator | Monday 09 March 2026 01:08:21 +0000 (0:00:04.245) 0:00:21.243 ********** 2026-03-09 01:12:30.235368 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:12:30.235383 | orchestrator | 2026-03-09 01:12:30.235398 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-09 01:12:30.235412 | orchestrator | Monday 09 March 2026 01:08:25 +0000 (0:00:03.967) 0:00:25.211 ********** 2026-03-09 01:12:30.235426 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-09 01:12:30.235440 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-09 01:12:30.235477 | orchestrator | 2026-03-09 01:12:30.235495 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-09 01:12:30.235510 | orchestrator | Monday 09 March 2026 01:08:35 +0000 (0:00:09.589) 0:00:34.800 ********** 2026-03-09 01:12:30.235525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:12:30.235599 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:12:30.235625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:12:30.235635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.235651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.235661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.235670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.235693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.235703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.235712 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.235725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.235735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.235750 | orchestrator | 2026-03-09 01:12:30.235759 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-09 01:12:30.235768 | orchestrator | Monday 09 March 2026 01:08:39 +0000 (0:00:04.047) 0:00:38.848 ********** 2026-03-09 01:12:30.235776 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.235784 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.235793 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.235802 | orchestrator | 2026-03-09 01:12:30.235811 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-09 01:12:30.235820 | orchestrator | Monday 09 March 2026 01:08:39 +0000 (0:00:00.363) 0:00:39.211 ********** 2026-03-09 01:12:30.235828 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:12:30.235837 | orchestrator | 2026-03-09 01:12:30.235845 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-09 01:12:30.235854 | orchestrator | Monday 09 March 2026 01:08:41 +0000 (0:00:01.267) 0:00:40.479 ********** 2026-03-09 01:12:30.235867 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-09 01:12:30.235876 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-09 01:12:30.235884 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-09 01:12:30.235893 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-09 01:12:30.235902 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-09 01:12:30.235910 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-09 01:12:30.235918 | orchestrator | 2026-03-09 01:12:30.235934 | orchestrator | TASK [cinder : 
Copying over multiple ceph.conf for cinder services] ************ 2026-03-09 01:12:30.235943 | orchestrator | Monday 09 March 2026 01:08:44 +0000 (0:00:03.305) 0:00:43.785 ********** 2026-03-09 01:12:30.235953 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-09 01:12:30.235963 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-09 01:12:30.235978 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-09 01:12:30.235993 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-09 01:12:30.236007 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-09 01:12:30.236016 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-09 01:12:30.236025 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-09 
01:12:30.236038 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-09 01:12:30.236052 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-09 01:12:30.236066 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-09 01:12:30.236077 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-09 01:12:30.236086 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-09 01:12:30.236095 | orchestrator | 2026-03-09 
01:12:30.236103 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-03-09 01:12:30.236111 | orchestrator | Monday 09 March 2026 01:08:50 +0000 (0:00:05.855) 0:00:49.640 **********
2026-03-09 01:12:30.236119 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-09 01:12:30.236129 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-09 01:12:30.236143 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-09 01:12:30.236152 | orchestrator |
2026-03-09 01:12:30.236160 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-03-09 01:12:30.236168 | orchestrator | Monday 09 March 2026 01:08:53 +0000 (0:00:03.531) 0:00:53.171 **********
2026-03-09 01:12:30.236177 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-03-09 01:12:30.236189 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-03-09 01:12:30.236198 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-03-09 01:12:30.236206 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-03-09 01:12:30.236216 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-03-09 01:12:30.236224 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-03-09 01:12:30.236232 | orchestrator |
2026-03-09 01:12:30.236240 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-03-09 01:12:30.236249 | orchestrator | Monday 09 March 2026 01:08:58 +0000 (0:00:04.594) 0:00:57.766 **********
2026-03-09 01:12:30.236257 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-03-09 01:12:30.236265 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-03-09 01:12:30.236273 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-03-09 01:12:30.236282 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-03-09 01:12:30.236290 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-03-09 01:12:30.236298 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-03-09 01:12:30.236306 | orchestrator |
2026-03-09 01:12:30.236314 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-03-09 01:12:30.236333 | orchestrator | Monday 09 March 2026 01:08:59 +0000 (0:00:01.544) 0:00:59.310 **********
2026-03-09 01:12:30.236341 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:12:30.236350 | orchestrator |
2026-03-09 01:12:30.236358 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-03-09 01:12:30.236366 | orchestrator | Monday 09 March 2026 01:09:00 +0000 (0:00:00.264) 0:00:59.574 **********
2026-03-09 01:12:30.236375 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:12:30.236383 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:12:30.236391 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:12:30.236399 | orchestrator |
2026-03-09 01:12:30.236407 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-09 01:12:30.236415 | orchestrator | Monday 09 March 2026 01:09:01 +0000 (0:00:00.931) 0:01:00.506 **********
2026-03-09 01:12:30.236424 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:12:30.236432 | orchestrator |
2026-03-09 01:12:30.236440 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-03-09 01:12:30.236510 | orchestrator | Monday 09 March 2026 01:09:02 +0000 (0:00:01.303) 0:01:01.810 **********
2026-03-09 01:12:30.236524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:12:30.236541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:12:30.236555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:12:30.236565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.236574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.236590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.236600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.236617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.236631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.236640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.236649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237146 | orchestrator |
2026-03-09 01:12:30.237160 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-03-09 01:12:30.237188 | orchestrator | Monday 09 March 2026 01:09:06 +0000 (0:00:04.466) 0:01:06.276 **********
2026-03-09 01:12:30.237198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:12:30.237207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237240 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:12:30.237258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:12:30.237286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:12:30.237325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237364 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:12:30.237372 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:12:30.237381 | orchestrator |
2026-03-09 01:12:30.237389 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-03-09 01:12:30.237397 | orchestrator | Monday 09 March 2026 01:09:08 +0000 (0:00:01.408) 0:01:07.685 **********
2026-03-09 01:12:30.237406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:12:30.237419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237477 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:12:30.237487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:12:30.237496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237526 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:12:30.237534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:12:30.237553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237579 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:12:30.237588 | orchestrator |
2026-03-09 01:12:30.237596 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-03-09 01:12:30.237604 | orchestrator | Monday 09 March 2026 01:09:10 +0000 (0:00:01.735) 0:01:09.421 **********
2026-03-09 01:12:30.237616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:12:30.237626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:12:30.237643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-09 01:12:30.237652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-09 01:12:30.237725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/',
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.237735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.237749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.237759 | orchestrator | 2026-03-09 01:12:30.237768 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-09 01:12:30.237778 | orchestrator | Monday 09 March 2026 01:09:15 +0000 (0:00:04.980) 0:01:14.402 ********** 2026-03-09 01:12:30.237787 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-09 01:12:30.237809 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-09 01:12:30.237818 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-09 01:12:30.237833 | orchestrator | 2026-03-09 01:12:30.237842 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-09 01:12:30.237852 | orchestrator | Monday 09 March 2026 01:09:17 +0000 (0:00:02.826) 0:01:17.229 ********** 2026-03-09 01:12:30.237866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:12:30.237877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:12:30.237887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:12:30.237897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 
01:12:30.237914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.237929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.237942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 
01:12:30.237953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.237962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.237972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.237986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.238001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.238011 | orchestrator | 2026-03-09 01:12:30.238072 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-09 01:12:30.238083 | orchestrator | Monday 09 March 2026 01:09:36 +0000 (0:00:18.819) 0:01:36.048 ********** 2026-03-09 01:12:30.238093 | orchestrator | 
changed: [testbed-node-0] 2026-03-09 01:12:30.238101 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:12:30.238110 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:12:30.238118 | orchestrator | 2026-03-09 01:12:30.238126 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-09 01:12:30.238140 | orchestrator | Monday 09 March 2026 01:09:38 +0000 (0:00:02.038) 0:01:38.086 ********** 2026-03-09 01:12:30.238149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-09 01:12:30.238157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  
2026-03-09 01:12:30.238166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:12:30.238179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:12:30.238192 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.238201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-09 01:12:30.238217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:12:30.238226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:12:30.238234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:12:30.238242 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.238255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-09 01:12:30.238269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:12:30.238278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-09 01:12:30.238291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-09 01:12:30.238299 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.238308 | orchestrator | 2026-03-09 01:12:30.238316 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-09 01:12:30.238324 | orchestrator | Monday 09 March 2026 01:09:40 +0000 (0:00:01.649) 
0:01:39.736 ********** 2026-03-09 01:12:30.238332 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.238340 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.238348 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.238356 | orchestrator | 2026-03-09 01:12:30.238364 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-09 01:12:30.238373 | orchestrator | Monday 09 March 2026 01:09:41 +0000 (0:00:00.710) 0:01:40.446 ********** 2026-03-09 01:12:30.238381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:12:30.238399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:12:30.238408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-09 01:12:30.238422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.238431 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.238439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.238447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.238489 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.238499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.238512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.238520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.238529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-09 01:12:30.238542 | orchestrator | 2026-03-09 01:12:30.238550 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-09 01:12:30.238559 | orchestrator | Monday 09 March 2026 01:09:45 +0000 (0:00:04.207) 0:01:44.653 ********** 2026-03-09 01:12:30.238568 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.238576 | orchestrator | skipping: 
[testbed-node-1] 2026-03-09 01:12:30.238584 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.238592 | orchestrator | 2026-03-09 01:12:30.238612 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-09 01:12:30.238621 | orchestrator | Monday 09 March 2026 01:09:45 +0000 (0:00:00.707) 0:01:45.361 ********** 2026-03-09 01:12:30.238629 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.238637 | orchestrator | 2026-03-09 01:12:30.238645 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-09 01:12:30.238653 | orchestrator | Monday 09 March 2026 01:09:48 +0000 (0:00:02.360) 0:01:47.721 ********** 2026-03-09 01:12:30.238661 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.238669 | orchestrator | 2026-03-09 01:12:30.238677 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-09 01:12:30.238685 | orchestrator | Monday 09 March 2026 01:09:51 +0000 (0:00:02.802) 0:01:50.524 ********** 2026-03-09 01:12:30.238693 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.238701 | orchestrator | 2026-03-09 01:12:30.238713 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-09 01:12:30.238721 | orchestrator | Monday 09 March 2026 01:10:11 +0000 (0:00:20.636) 0:02:11.160 ********** 2026-03-09 01:12:30.238729 | orchestrator | 2026-03-09 01:12:30.238877 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-09 01:12:30.238887 | orchestrator | Monday 09 March 2026 01:10:11 +0000 (0:00:00.080) 0:02:11.241 ********** 2026-03-09 01:12:30.238895 | orchestrator | 2026-03-09 01:12:30.238903 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-09 01:12:30.238921 | orchestrator | Monday 09 March 2026 01:10:11 +0000 (0:00:00.086) 
0:02:11.328 ********** 2026-03-09 01:12:30.238929 | orchestrator | 2026-03-09 01:12:30.238937 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-09 01:12:30.238945 | orchestrator | Monday 09 March 2026 01:10:12 +0000 (0:00:00.098) 0:02:11.426 ********** 2026-03-09 01:12:30.238953 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.238961 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:12:30.238970 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:12:30.238977 | orchestrator | 2026-03-09 01:12:30.238986 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-09 01:12:30.238994 | orchestrator | Monday 09 March 2026 01:10:37 +0000 (0:00:25.173) 0:02:36.600 ********** 2026-03-09 01:12:30.239016 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:12:30.239025 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:12:30.239033 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.239041 | orchestrator | 2026-03-09 01:12:30.239049 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-09 01:12:30.239057 | orchestrator | Monday 09 March 2026 01:10:47 +0000 (0:00:10.722) 0:02:47.323 ********** 2026-03-09 01:12:30.239065 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.239073 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:12:30.239081 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:12:30.239089 | orchestrator | 2026-03-09 01:12:30.239097 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-09 01:12:30.239105 | orchestrator | Monday 09 March 2026 01:11:12 +0000 (0:00:24.372) 0:03:11.695 ********** 2026-03-09 01:12:30.239113 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:12:30.239121 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.239129 | orchestrator | changed: 
[testbed-node-2] 2026-03-09 01:12:30.239144 | orchestrator | 2026-03-09 01:12:30.239152 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-09 01:12:30.239167 | orchestrator | Monday 09 March 2026 01:11:24 +0000 (0:00:12.242) 0:03:23.938 ********** 2026-03-09 01:12:30.239175 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.239183 | orchestrator | 2026-03-09 01:12:30.239202 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:12:30.239211 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-09 01:12:30.239219 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:12:30.239227 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:12:30.239235 | orchestrator | 2026-03-09 01:12:30.239244 | orchestrator | 2026-03-09 01:12:30.239251 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:12:30.239259 | orchestrator | Monday 09 March 2026 01:11:24 +0000 (0:00:00.283) 0:03:24.221 ********** 2026-03-09 01:12:30.239268 | orchestrator | =============================================================================== 2026-03-09 01:12:30.239275 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.17s 2026-03-09 01:12:30.239283 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 24.37s 2026-03-09 01:12:30.239291 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.64s 2026-03-09 01:12:30.239299 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 18.82s 2026-03-09 01:12:30.239307 | orchestrator | cinder : Restart cinder-backup container 
------------------------------- 12.24s 2026-03-09 01:12:30.239315 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.72s 2026-03-09 01:12:30.239323 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.59s 2026-03-09 01:12:30.239331 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.17s 2026-03-09 01:12:30.239339 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.86s 2026-03-09 01:12:30.239347 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.98s 2026-03-09 01:12:30.239355 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.59s 2026-03-09 01:12:30.239363 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.47s 2026-03-09 01:12:30.239371 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.25s 2026-03-09 01:12:30.239379 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.21s 2026-03-09 01:12:30.239387 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.08s 2026-03-09 01:12:30.239395 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.04s 2026-03-09 01:12:30.239403 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.97s 2026-03-09 01:12:30.239411 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.63s 2026-03-09 01:12:30.239426 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.53s 2026-03-09 01:12:30.239434 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.31s 2026-03-09 01:12:30.239446 | orchestrator | 2026-03-09 01:12:30 | INFO  | Task 
9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:12:30.239907 | orchestrator | 2026-03-09 01:12:30.239935 | orchestrator | 2026-03-09 01:12:30.239944 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:12:30.239953 | orchestrator | 2026-03-09 01:12:30.239961 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:12:30.239981 | orchestrator | Monday 09 March 2026 01:07:34 +0000 (0:00:00.523) 0:00:00.523 ********** 2026-03-09 01:12:30.240003 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:12:30.240012 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:12:30.240021 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:12:30.240030 | orchestrator | 2026-03-09 01:12:30.240039 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:12:30.240048 | orchestrator | Monday 09 March 2026 01:07:35 +0000 (0:00:00.479) 0:00:01.002 ********** 2026-03-09 01:12:30.240057 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-09 01:12:30.240066 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-09 01:12:30.240075 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-09 01:12:30.240084 | orchestrator | 2026-03-09 01:12:30.240093 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-09 01:12:30.240107 | orchestrator | 2026-03-09 01:12:30.240121 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-09 01:12:30.240142 | orchestrator | Monday 09 March 2026 01:07:35 +0000 (0:00:00.588) 0:00:01.591 ********** 2026-03-09 01:12:30.240161 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:12:30.240175 | orchestrator | 2026-03-09 01:12:30.240219 | 
orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-09 01:12:30.240234 | orchestrator | Monday 09 March 2026 01:07:36 +0000 (0:00:00.719) 0:00:02.310 ********** 2026-03-09 01:12:30.240263 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-09 01:12:30.240278 | orchestrator | 2026-03-09 01:12:30.240292 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-09 01:12:30.240306 | orchestrator | Monday 09 March 2026 01:07:40 +0000 (0:00:03.521) 0:00:05.831 ********** 2026-03-09 01:12:30.240321 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-09 01:12:30.240336 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-09 01:12:30.240352 | orchestrator | 2026-03-09 01:12:30.240371 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-09 01:12:30.240393 | orchestrator | Monday 09 March 2026 01:07:47 +0000 (0:00:07.274) 0:00:13.106 ********** 2026-03-09 01:12:30.240406 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:12:30.240419 | orchestrator | 2026-03-09 01:12:30.240434 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-09 01:12:30.240449 | orchestrator | Monday 09 March 2026 01:07:51 +0000 (0:00:04.083) 0:00:17.190 ********** 2026-03-09 01:12:30.240640 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-09 01:12:30.240675 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:12:30.240695 | orchestrator | 2026-03-09 01:12:30.240706 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-09 01:12:30.240717 | orchestrator | Monday 09 March 2026 01:07:56 +0000 (0:00:04.787) 0:00:21.977 
********** 2026-03-09 01:12:30.240728 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:12:30.240737 | orchestrator | 2026-03-09 01:12:30.240746 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-09 01:12:30.240755 | orchestrator | Monday 09 March 2026 01:08:00 +0000 (0:00:03.907) 0:00:25.885 ********** 2026-03-09 01:12:30.240764 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-09 01:12:30.240773 | orchestrator | 2026-03-09 01:12:30.240782 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-09 01:12:30.240791 | orchestrator | Monday 09 March 2026 01:08:04 +0000 (0:00:04.405) 0:00:30.290 ********** 2026-03-09 01:12:30.240855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:12:30.240881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:12:30.240892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:12:30.240907 | orchestrator | 2026-03-09 01:12:30.240916 | 
orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-09 01:12:30.240978 | orchestrator | Monday 09 March 2026 01:08:10 +0000 (0:00:05.506) 0:00:35.797 ********** 2026-03-09 01:12:30.241001 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:12:30.241014 | orchestrator | 2026-03-09 01:12:30.241027 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-09 01:12:30.241049 | orchestrator | Monday 09 March 2026 01:08:10 +0000 (0:00:00.643) 0:00:36.440 ********** 2026-03-09 01:12:30.241062 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.241074 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:12:30.241085 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:12:30.241097 | orchestrator | 2026-03-09 01:12:30.241108 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-09 01:12:30.241120 | orchestrator | Monday 09 March 2026 01:08:14 +0000 (0:00:04.118) 0:00:40.558 ********** 2026-03-09 01:12:30.241132 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:12:30.241145 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:12:30.241156 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:12:30.241168 | orchestrator | 2026-03-09 01:12:30.241179 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-09 01:12:30.241192 | orchestrator | Monday 09 March 2026 01:08:16 +0000 (0:00:01.643) 0:00:42.202 ********** 2026-03-09 01:12:30.241205 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 
'ceph', 'enabled': True}) 2026-03-09 01:12:30.241217 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:12:30.241230 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-09 01:12:30.241243 | orchestrator | 2026-03-09 01:12:30.241255 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-09 01:12:30.241268 | orchestrator | Monday 09 March 2026 01:08:17 +0000 (0:00:01.257) 0:00:43.460 ********** 2026-03-09 01:12:30.241281 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:12:30.241294 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:12:30.241307 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:12:30.241320 | orchestrator | 2026-03-09 01:12:30.241333 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-09 01:12:30.241345 | orchestrator | Monday 09 March 2026 01:08:18 +0000 (0:00:00.875) 0:00:44.335 ********** 2026-03-09 01:12:30.241358 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.241370 | orchestrator | 2026-03-09 01:12:30.241383 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-09 01:12:30.241396 | orchestrator | Monday 09 March 2026 01:08:19 +0000 (0:00:00.459) 0:00:44.794 ********** 2026-03-09 01:12:30.241409 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.241432 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.241440 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.241448 | orchestrator | 2026-03-09 01:12:30.241478 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-09 01:12:30.241486 | orchestrator | Monday 09 March 2026 01:08:19 +0000 (0:00:00.360) 0:00:45.155 ********** 2026-03-09 01:12:30.241494 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:12:30.241502 | orchestrator | 2026-03-09 01:12:30.241510 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-09 01:12:30.241518 | orchestrator | Monday 09 March 2026 01:08:20 +0000 (0:00:00.667) 0:00:45.822 ********** 2026-03-09 01:12:30.241534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:12:30.241554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:12:30.241579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:12:30.241588 | orchestrator | 2026-03-09 01:12:30.241596 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-09 01:12:30.241604 | orchestrator | Monday 09 March 2026 01:08:25 +0000 (0:00:05.131) 0:00:50.954 ********** 2026-03-09 01:12:30.241623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:12:30.241633 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.241642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': 
True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:12:30.241668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:12:30.241677 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.241685 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.241693 | orchestrator | 2026-03-09 01:12:30.241701 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-09 01:12:30.241709 | orchestrator | Monday 09 March 2026 01:08:30 +0000 (0:00:05.443) 0:00:56.397 ********** 2026-03-09 01:12:30.241718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:12:30.241733 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.241745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:12:30.241755 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.241769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-09 01:12:30.241790 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.241798 | orchestrator | 2026-03-09 01:12:30.241806 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-09 01:12:30.241814 | orchestrator | Monday 09 March 2026 01:08:40 +0000 (0:00:09.241) 0:01:05.638 ********** 2026-03-09 01:12:30.241822 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.241830 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.241838 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.241846 | orchestrator | 2026-03-09 01:12:30.241854 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-09 01:12:30.241862 | orchestrator | Monday 09 March 2026 01:08:46 +0000 (0:00:06.336) 0:01:11.974 ********** 2026-03-09 01:12:30.241870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:12:30.241894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:12:30.241909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:12:30.241918 | orchestrator | 2026-03-09 01:12:30.241926 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-09 01:12:30.241934 | orchestrator | Monday 09 March 2026 01:08:53 +0000 (0:00:06.869) 0:01:18.844 ********** 2026-03-09 01:12:30.241942 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.241950 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:12:30.241958 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:12:30.241966 | orchestrator | 2026-03-09 01:12:30.241974 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-09 01:12:30.241983 | orchestrator | Monday 09 March 2026 01:09:02 +0000 (0:00:09.744) 0:01:28.589 ********** 2026-03-09 01:12:30.241997 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.242005 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.242013 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.242074 | orchestrator | 2026-03-09 01:12:30.242082 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 
2026-03-09 01:12:30.242091 | orchestrator | Monday 09 March 2026 01:09:07 +0000 (0:00:04.920) 0:01:33.510 ********** 2026-03-09 01:12:30.242098 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.242106 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.242114 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.242127 | orchestrator | 2026-03-09 01:12:30.242135 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-09 01:12:30.242144 | orchestrator | Monday 09 March 2026 01:09:12 +0000 (0:00:04.227) 0:01:37.738 ********** 2026-03-09 01:12:30.242152 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.242182 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.242190 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.242199 | orchestrator | 2026-03-09 01:12:30.242206 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-09 01:12:30.242214 | orchestrator | Monday 09 March 2026 01:09:17 +0000 (0:00:05.027) 0:01:42.765 ********** 2026-03-09 01:12:30.242222 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.242231 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.242239 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.242247 | orchestrator | 2026-03-09 01:12:30.242255 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-09 01:12:30.242263 | orchestrator | Monday 09 March 2026 01:09:22 +0000 (0:00:05.602) 0:01:48.368 ********** 2026-03-09 01:12:30.242271 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.242279 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.242288 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.242303 | orchestrator | 2026-03-09 01:12:30.242311 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 
2026-03-09 01:12:30.242319 | orchestrator | Monday 09 March 2026 01:09:23 +0000 (0:00:00.924) 0:01:49.292 ********** 2026-03-09 01:12:30.242328 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-09 01:12:30.242336 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.242344 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-09 01:12:30.242352 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.242360 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-09 01:12:30.242368 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.242376 | orchestrator | 2026-03-09 01:12:30.242384 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-09 01:12:30.242392 | orchestrator | Monday 09 March 2026 01:09:30 +0000 (0:00:07.306) 0:01:56.599 ********** 2026-03-09 01:12:30.242400 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:12:30.242408 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.242416 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:12:30.242424 | orchestrator | 2026-03-09 01:12:30.242431 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-09 01:12:30.242439 | orchestrator | Monday 09 March 2026 01:09:38 +0000 (0:00:07.380) 0:02:03.980 ********** 2026-03-09 01:12:30.242448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:12:30.242527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:12:30.242539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-09 01:12:30.242548 | orchestrator | 2026-03-09 01:12:30.242556 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-09 01:12:30.242564 | orchestrator | Monday 09 March 2026 01:09:44 +0000 (0:00:06.143) 0:02:10.123 ********** 2026-03-09 01:12:30.242572 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:12:30.242580 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:12:30.242588 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:12:30.242602 | orchestrator | 2026-03-09 01:12:30.242610 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-09 01:12:30.242618 | orchestrator | Monday 09 March 2026 01:09:44 +0000 (0:00:00.434) 0:02:10.557 ********** 2026-03-09 01:12:30.242632 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.242642 | orchestrator | 2026-03-09 01:12:30.242650 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-09 01:12:30.242658 | orchestrator | Monday 09 March 2026 01:09:47 +0000 (0:00:02.201) 0:02:12.759 ********** 2026-03-09 01:12:30.242666 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.242674 | orchestrator | 2026-03-09 01:12:30.242682 | orchestrator | TASK 
[glance : Enable log_bin_trust_function_creators function] **************** 2026-03-09 01:12:30.242690 | orchestrator | Monday 09 March 2026 01:09:49 +0000 (0:00:02.696) 0:02:15.455 ********** 2026-03-09 01:12:30.242698 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.242706 | orchestrator | 2026-03-09 01:12:30.242714 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-09 01:12:30.242722 | orchestrator | Monday 09 March 2026 01:09:52 +0000 (0:00:02.205) 0:02:17.660 ********** 2026-03-09 01:12:30.242730 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.242737 | orchestrator | 2026-03-09 01:12:30.242745 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-09 01:12:30.242753 | orchestrator | Monday 09 March 2026 01:10:23 +0000 (0:00:30.961) 0:02:48.622 ********** 2026-03-09 01:12:30.242761 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.242769 | orchestrator | 2026-03-09 01:12:30.242781 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-09 01:12:30.242789 | orchestrator | Monday 09 March 2026 01:10:25 +0000 (0:00:02.396) 0:02:51.019 ********** 2026-03-09 01:12:30.242797 | orchestrator | 2026-03-09 01:12:30.242810 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-09 01:12:30.242818 | orchestrator | Monday 09 March 2026 01:10:25 +0000 (0:00:00.355) 0:02:51.374 ********** 2026-03-09 01:12:30.242826 | orchestrator | 2026-03-09 01:12:30.242834 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-09 01:12:30.242842 | orchestrator | Monday 09 March 2026 01:10:25 +0000 (0:00:00.080) 0:02:51.455 ********** 2026-03-09 01:12:30.242850 | orchestrator | 2026-03-09 01:12:30.242858 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] 
************************ 2026-03-09 01:12:30.242866 | orchestrator | Monday 09 March 2026 01:10:25 +0000 (0:00:00.074) 0:02:51.530 ********** 2026-03-09 01:12:30.242874 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:12:30.242887 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:12:30.242897 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:12:30.242905 | orchestrator | 2026-03-09 01:12:30.242913 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:12:30.242922 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-09 01:12:30.242931 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-09 01:12:30.242940 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-09 01:12:30.242947 | orchestrator | 2026-03-09 01:12:30.242955 | orchestrator | 2026-03-09 01:12:30.242963 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:12:30.242971 | orchestrator | Monday 09 March 2026 01:11:04 +0000 (0:00:38.494) 0:03:30.024 ********** 2026-03-09 01:12:30.242979 | orchestrator | =============================================================================== 2026-03-09 01:12:30.242987 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.49s 2026-03-09 01:12:30.242995 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.96s 2026-03-09 01:12:30.243009 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.74s 2026-03-09 01:12:30.243017 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 9.24s 2026-03-09 01:12:30.243025 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 7.38s 
2026-03-09 01:12:30.243033 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 7.31s 2026-03-09 01:12:30.243041 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.27s 2026-03-09 01:12:30.243049 | orchestrator | glance : Copying over config.json files for services -------------------- 6.87s 2026-03-09 01:12:30.243057 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 6.34s 2026-03-09 01:12:30.243065 | orchestrator | glance : Check glance containers ---------------------------------------- 6.14s 2026-03-09 01:12:30.243073 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.60s 2026-03-09 01:12:30.243081 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.51s 2026-03-09 01:12:30.243089 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.44s 2026-03-09 01:12:30.243097 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.13s 2026-03-09 01:12:30.243106 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.03s 2026-03-09 01:12:30.243114 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.92s 2026-03-09 01:12:30.243121 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.79s 2026-03-09 01:12:30.243129 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.41s 2026-03-09 01:12:30.243137 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.23s 2026-03-09 01:12:30.243145 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.12s 2026-03-09 01:12:30.243153 | orchestrator | 2026-03-09 01:12:30 | INFO  | Task 99ccd622-14b8-4338-8926-9ab114f08de7 is in state 
SUCCESS 2026-03-09 01:12:30.243161 | orchestrator | 2026-03-09 01:12:30 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED 2026-03-09 01:12:30.243169 | orchestrator | 2026-03-09 01:12:30 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED 2026-03-09 01:12:30.243177 | orchestrator | 2026-03-09 01:12:30 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:12:33.276939 | orchestrator | 2026-03-09 01:12:33 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:12:33.280055 | orchestrator | 2026-03-09 01:12:33 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED 2026-03-09 01:12:33.282326 | orchestrator | 2026-03-09 01:12:33 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED 2026-03-09 01:12:33.282400 | orchestrator | 2026-03-09 01:12:33 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:12:36.334714 | orchestrator | 2026-03-09 01:12:36 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:12:36.335620 | orchestrator | 2026-03-09 01:12:36 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED 2026-03-09 01:12:36.337381 | orchestrator | 2026-03-09 01:12:36 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED 2026-03-09 01:12:36.337404 | orchestrator | 2026-03-09 01:12:36 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:12:39.371697 | orchestrator | 2026-03-09 01:12:39 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:12:39.373191 | orchestrator | 2026-03-09 01:12:39 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED 2026-03-09 01:12:39.374365 | orchestrator | 2026-03-09 01:12:39 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED 2026-03-09 01:12:39.374439 | orchestrator | 2026-03-09 01:12:39 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:12:42.414289 | orchestrator | 
2026-03-09 01:12:42 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:12:42.415155 | orchestrator | 2026-03-09 01:12:42 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:12:42.416996 | orchestrator | 2026-03-09 01:12:42 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED
2026-03-09 01:12:42.417044 | orchestrator | 2026-03-09 01:12:42 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:12:45.456860 | orchestrator | 2026-03-09 01:12:45 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:12:45.456956 | orchestrator | 2026-03-09 01:12:45 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:12:45.457318 | orchestrator | 2026-03-09 01:12:45 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED
2026-03-09 01:12:45.457334 | orchestrator | 2026-03-09 01:12:45 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:12:48.498346 | orchestrator | 2026-03-09 01:12:48 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:12:48.499726 | orchestrator | 2026-03-09 01:12:48 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:12:48.501678 | orchestrator | 2026-03-09 01:12:48 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED
2026-03-09 01:12:48.501736 | orchestrator | 2026-03-09 01:12:48 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:12:51.587917 | orchestrator | 2026-03-09 01:12:51 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:12:51.589088 | orchestrator | 2026-03-09 01:12:51 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:12:51.593919 | orchestrator | 2026-03-09 01:12:51 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state STARTED
2026-03-09 01:12:51.596285 | orchestrator | 2026-03-09 01:12:51 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:12:54.632305 | orchestrator | 2026-03-09 01:12:54 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:12:54.633435 | orchestrator | 2026-03-09 01:12:54 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:12:54.635823 | orchestrator | 2026-03-09 01:12:54 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:12:54.637100 | orchestrator | 2026-03-09 01:12:54 | INFO  | Task 1f788bd4-21c7-45ae-97cd-538ec37dda67 is in state SUCCESS
2026-03-09 01:12:54.637395 | orchestrator | 2026-03-09 01:12:54 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:12:57.672326 | orchestrator | 2026-03-09 01:12:57 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:12:57.673277 | orchestrator | 2026-03-09 01:12:57 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:12:57.674539 | orchestrator | 2026-03-09 01:12:57 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:12:57.674566 | orchestrator | 2026-03-09 01:12:57 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:00.721357 | orchestrator | 2026-03-09 01:13:00 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:00.723187 | orchestrator | 2026-03-09 01:13:00 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:13:00.723843 | orchestrator | 2026-03-09 01:13:00 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:00.723906 | orchestrator | 2026-03-09 01:13:00 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:03.763526 | orchestrator | 2026-03-09 01:13:03 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:03.765689 | orchestrator | 2026-03-09 01:13:03 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:13:03.767529 | orchestrator | 2026-03-09 01:13:03 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:03.767750 | orchestrator | 2026-03-09 01:13:03 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:06.803625 | orchestrator | 2026-03-09 01:13:06 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:06.805581 | orchestrator | 2026-03-09 01:13:06 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:13:06.808956 | orchestrator | 2026-03-09 01:13:06 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:06.809017 | orchestrator | 2026-03-09 01:13:06 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:09.843399 | orchestrator | 2026-03-09 01:13:09 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:09.843976 | orchestrator | 2026-03-09 01:13:09 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:13:09.845413 | orchestrator | 2026-03-09 01:13:09 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:09.845455 | orchestrator | 2026-03-09 01:13:09 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:12.883832 | orchestrator | 2026-03-09 01:13:12 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:12.885704 | orchestrator | 2026-03-09 01:13:12 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:13:12.887803 | orchestrator | 2026-03-09 01:13:12 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:12.887884 | orchestrator | 2026-03-09 01:13:12 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:15.936098 | orchestrator | 2026-03-09 01:13:15 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:15.936215 | orchestrator | 2026-03-09 01:13:15 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:13:15.936233 | orchestrator | 2026-03-09 01:13:15 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:15.936245 | orchestrator | 2026-03-09 01:13:15 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:18.980202 | orchestrator | 2026-03-09 01:13:18 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:18.980745 | orchestrator | 2026-03-09 01:13:18 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:13:18.982223 | orchestrator | 2026-03-09 01:13:18 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:18.982263 | orchestrator | 2026-03-09 01:13:18 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:22.022730 | orchestrator | 2026-03-09 01:13:22 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:22.023519 | orchestrator | 2026-03-09 01:13:22 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:13:22.024077 | orchestrator | 2026-03-09 01:13:22 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:22.024153 | orchestrator | 2026-03-09 01:13:22 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:25.092109 | orchestrator | 2026-03-09 01:13:25 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:25.095967 | orchestrator | 2026-03-09 01:13:25 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:13:25.097821 | orchestrator | 2026-03-09 01:13:25 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:25.098144 | orchestrator | 2026-03-09 01:13:25 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:28.135758 | orchestrator | 2026-03-09 01:13:28 | INFO  | Task
9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:28.136580 | orchestrator | 2026-03-09 01:13:28 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state STARTED
2026-03-09 01:13:28.139802 | orchestrator | 2026-03-09 01:13:28 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:28.139863 | orchestrator | 2026-03-09 01:13:28 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:31.187911 | orchestrator | 2026-03-09 01:13:31 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:31.193362 | orchestrator | 2026-03-09 01:13:31 | INFO  | Task 7e5c9f15-d787-4b79-882a-b8466012c726 is in state SUCCESS
2026-03-09 01:13:31.195989 | orchestrator |
2026-03-09 01:13:31.196062 | orchestrator |
2026-03-09 01:13:31.196076 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:13:31.196090 | orchestrator |
2026-03-09 01:13:31.196101 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:13:31.196114 | orchestrator | Monday 09 March 2026 01:09:44 +0000 (0:00:00.343) 0:00:00.343 **********
2026-03-09 01:13:31.196125 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:13:31.196137 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:13:31.196148 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:13:31.196159 | orchestrator |
2026-03-09 01:13:31.196171 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:13:31.196182 | orchestrator | Monday 09 March 2026 01:09:44 +0000 (0:00:00.391) 0:00:00.735 **********
2026-03-09 01:13:31.196193 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-09 01:13:31.196205 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-09 01:13:31.196216 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-09 01:13:31.196227 | orchestrator |
2026-03-09 01:13:31.196238 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-09 01:13:31.196249 | orchestrator |
2026-03-09 01:13:31.196260 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-09 01:13:31.196271 | orchestrator | Monday 09 March 2026 01:09:45 +0000 (0:00:01.095) 0:00:01.830 **********
2026-03-09 01:13:31.196282 | orchestrator |
2026-03-09 01:13:31.196293 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-09 01:13:31.196304 | orchestrator |
2026-03-09 01:13:31.196316 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-09 01:13:31.196327 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:13:31.196338 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:13:31.196349 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:13:31.196360 | orchestrator |
2026-03-09 01:13:31.196371 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:13:31.196383 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:13:31.196396 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:13:31.196408 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:13:31.196446 | orchestrator |
2026-03-09 01:13:31.196458 | orchestrator |
2026-03-09 01:13:31.196497 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:13:31.196509 | orchestrator | Monday 09 March 2026 01:12:51 +0000 (0:03:05.137) 0:03:06.968 **********
2026-03-09 01:13:31.196741 | orchestrator | ===============================================================================
2026-03-09 01:13:31.196755 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 185.14s
2026-03-09 01:13:31.196768 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.10s
2026-03-09 01:13:31.196781 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s
2026-03-09 01:13:31.196794 | orchestrator |
2026-03-09 01:13:31.196805 | orchestrator |
2026-03-09 01:13:31.196816 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-09 01:13:31.196827 | orchestrator |
2026-03-09 01:13:31.196839 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-09 01:13:31.196850 | orchestrator | Monday 09 March 2026 01:11:10 +0000 (0:00:00.343) 0:00:00.343 **********
2026-03-09 01:13:31.196861 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:13:31.196873 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:13:31.196884 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:13:31.196895 | orchestrator |
2026-03-09 01:13:31.196906 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-09 01:13:31.196917 | orchestrator | Monday 09 March 2026 01:11:11 +0000 (0:00:00.345) 0:00:00.688 **********
2026-03-09 01:13:31.196928 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-09 01:13:31.196939 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-09 01:13:31.196951 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-09 01:13:31.196962 | orchestrator |
2026-03-09 01:13:31.196973 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-03-09 01:13:31.196984 | orchestrator |
2026-03-09 01:13:31.196995 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-09 01:13:31.197006 | orchestrator | Monday
09 March 2026 01:11:11 +0000 (0:00:00.495) 0:00:01.183 ********** 2026-03-09 01:13:31.197017 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:13:31.197028 | orchestrator | 2026-03-09 01:13:31.197040 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-09 01:13:31.197050 | orchestrator | Monday 09 March 2026 01:11:12 +0000 (0:00:00.586) 0:00:01.770 ********** 2026-03-09 01:13:31.197089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:13:31.197139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:13:31.197177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:13:31.197198 | orchestrator | 2026-03-09 01:13:31.197217 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-09 01:13:31.197237 | orchestrator | Monday 09 March 2026 01:11:12 +0000 (0:00:00.834) 0:00:02.605 ********** 2026-03-09 01:13:31.197258 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-09 01:13:31.197271 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-09 01:13:31.197282 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:13:31.197293 | orchestrator | 2026-03-09 01:13:31.197305 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-09 01:13:31.197315 | orchestrator | Monday 09 March 2026 01:11:14 +0000 (0:00:01.245) 0:00:03.850 ********** 2026-03-09 01:13:31.197327 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:13:31.197338 | orchestrator | 2026-03-09 01:13:31.197392 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-09 01:13:31.197405 | orchestrator | Monday 09 March 2026 01:11:15 +0000 (0:00:01.192) 0:00:05.042 ********** 2026-03-09 01:13:31.197417 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:13:31.197429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:13:31.197457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2026-03-09 01:13:31.197503 | orchestrator | 2026-03-09 01:13:31.197515 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-09 01:13:31.197534 | orchestrator | Monday 09 March 2026 01:11:16 +0000 (0:00:01.623) 0:00:06.666 ********** 2026-03-09 01:13:31.197546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 01:13:31.197558 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:13:31.197570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 01:13:31.197582 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:13:31.197593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 01:13:31.197605 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:13:31.197616 | orchestrator | 2026-03-09 01:13:31.197627 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-09 01:13:31.197638 | orchestrator | Monday 09 March 2026 01:11:17 +0000 (0:00:00.460) 0:00:07.127 ********** 2026-03-09 01:13:31.197650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 01:13:31.197668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 01:13:31.197687 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:13:31.197698 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:13:31.197719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-09 01:13:31.197731 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:13:31.197742 | orchestrator | 2026-03-09 01:13:31.197754 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-09 01:13:31.197765 | orchestrator | Monday 09 March 2026 01:11:18 +0000 (0:00:00.982) 0:00:08.110 ********** 2026-03-09 01:13:31.197776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:13:31.197788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:13:31.197800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:13:31.197812 | orchestrator | 2026-03-09 01:13:31.197823 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-09 01:13:31.197835 | orchestrator | Monday 09 March 2026 01:11:19 +0000 (0:00:01.350) 0:00:09.461 ********** 2026-03-09 01:13:31.197846 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:13:31.197876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:13:31.197889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:13:31.197900 | orchestrator | 2026-03-09 01:13:31.197912 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-09 01:13:31.197923 | orchestrator | Monday 09 March 2026 01:11:21 +0000 (0:00:01.535) 0:00:10.997 ********** 2026-03-09 01:13:31.197934 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:13:31.197945 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:13:31.197956 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:13:31.197967 | orchestrator | 2026-03-09 01:13:31.197979 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-09 01:13:31.197990 | orchestrator | Monday 09 March 2026 01:11:21 +0000 (0:00:00.533) 0:00:11.530 ********** 2026-03-09 01:13:31.198002 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-09 01:13:31.198013 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-09 01:13:31.198081 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-09 01:13:31.198093 | orchestrator | 2026-03-09 01:13:31.198105 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-09 01:13:31.198116 | orchestrator | Monday 09 March 2026 01:11:23 +0000 (0:00:01.326) 0:00:12.857 ********** 2026-03-09 01:13:31.198127 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-09 01:13:31.198138 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-09 01:13:31.198150 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-09 01:13:31.198162 | orchestrator | 2026-03-09 01:13:31.198173 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-09 01:13:31.198183 | orchestrator | Monday 09 March 2026 01:11:24 +0000 (0:00:01.367) 0:00:14.224 ********** 2026-03-09 01:13:31.198194 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-09 01:13:31.198205 | orchestrator | 2026-03-09 01:13:31.198216 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-09 01:13:31.198228 | orchestrator | Monday 09 March 2026 01:11:25 +0000 (0:00:01.020) 0:00:15.244 ********** 2026-03-09 01:13:31.198239 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-09 01:13:31.198249 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-09 01:13:31.198261 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:13:31.198279 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:13:31.198291 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:13:31.198302 | orchestrator | 2026-03-09 01:13:31.198312 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-09 01:13:31.198323 | orchestrator | Monday 09 March 2026 01:11:26 +0000 (0:00:00.807) 0:00:16.052 ********** 2026-03-09 01:13:31.198334 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:13:31.198345 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:13:31.198356 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:13:31.198367 | orchestrator | 2026-03-09 01:13:31.198378 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-09 01:13:31.198389 | orchestrator | Monday 09 March 2026 01:11:26 +0000 (0:00:00.631) 0:00:16.683 ********** 2026-03-09 01:13:31.198401 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1102533, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0042734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.198429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1102533, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0042734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.198441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1102533, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0042734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}})
2026-03-09 01:13:31.198453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1102560, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0266526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1102560, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0266526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1102560, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0266526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1102649, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0385032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1102649, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0385032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1102649, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0385032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1102555, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0100822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1102555, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0100822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1102555, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0100822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1102656, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0415611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1102656, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0415611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1102656, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0415611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1102544, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0060823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1102544, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0060823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1102544, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0060823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1102617, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0299287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1102617, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0299287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1102617, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0299287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1102640, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.035205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1102640, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.035205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1102640, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.035205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1102531, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0028834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1102531, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0028834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1102531, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0028834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1102539, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0058963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1102539, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0058963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1102539, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0058963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1102557, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0117157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1102557, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0117157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1102557, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0117157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1102625, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0320055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.198993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1102625, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0320055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1102625, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0320055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1102644, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0372262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1102644, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0372262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1102644, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0372262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1102552, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0090823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1102552, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0090823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1102638, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.035205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1102552, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0090823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1102638, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.035205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1102666, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0430815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1102638, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.035205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1102666, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0430815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1102621, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0305111, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1102666, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0430815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1102621, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0305111, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1102612, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0295167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1102621, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0305111, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1102612, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0295167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1102604, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0278068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1102612, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0295167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1102604, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0278068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1102629, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0342004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1102604, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0278068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1102629, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0342004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1102600, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.026836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1102600, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.026836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1102629, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0342004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-09 01:13:31.199379 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1102643, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.036486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1102643, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.036486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1102548, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0080822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-03-09 01:13:31.199422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1102600, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.026836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1102548, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0080822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1102816, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0844734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1102643, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.036486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1102816, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0844734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1102710, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.057083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1102548, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0080822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1102710, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.057083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1102686, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0480232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1102816, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0844734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1102686, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0480232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1102732, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0610828, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1102710, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.057083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1102732, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0610828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1102673, 'dev': 181, 'nlink': 1, 
'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.043641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1102686, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0480232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1102673, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.043641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1102759, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.073968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1102732, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0610828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1102759, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.073968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1102733, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0711656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1102673, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.043641, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1102733, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0711656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1102766, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0751002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1102759, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.073968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1102766, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0751002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 
01:13:31.199803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1102804, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0818784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1102733, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0711656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1102804, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0818784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1102758, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.072083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1102758, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.072083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1102766, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0751002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1102729, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0597632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1102729, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0597632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1102804, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0818784, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1102702, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.051481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1102702, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.051481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1102758, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 
'ctime': 1773015422.072083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1102726, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0580828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1102726, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0580828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.199997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1102729, 'dev': 181, 'nlink': 
1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0597632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1102690, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.05004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1102690, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.05004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1102702, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.051481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1102730, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0605557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1102730, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0605557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1102726, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0580828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1102789, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0811422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1102789, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0811422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1102690, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.05004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1102775, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0784879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1102775, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0784879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 
01:13:31.200160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1102730, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0605557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1102675, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0451844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1102675, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0451844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1102789, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0811422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1102680, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0472114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1102680, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0472114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1102754, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.072072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1102775, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0784879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1102754, 'dev': 181, 
'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.072072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1102771, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0755792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1102675, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0451844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1102771, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0755792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1102680, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0472114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1102754, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.072072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1102771, 'dev': 181, 'nlink': 1, 'atime': 1773014549.0, 'mtime': 1773014549.0, 'ctime': 1773015422.0755792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-09 01:13:31.200438 | orchestrator | 2026-03-09 01:13:31.200449 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-09 01:13:31.200460 | orchestrator | Monday 09 March 2026 01:12:10 +0000 (0:00:43.673) 0:01:00.357 ********** 2026-03-09 01:13:31.200531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-09 01:13:31.200549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-09 01:13:31.200560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-09 01:13:31.200570 | orchestrator |
2026-03-09 01:13:31.200580 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-03-09 01:13:31.200591 | orchestrator | Monday 09 March 2026 01:12:12 +0000 (0:00:01.439) 0:01:01.796 **********
2026-03-09 01:13:31.200601 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:13:31.200611 | orchestrator |
2026-03-09 01:13:31.200621 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-09 01:13:31.200631 | orchestrator | Monday 09 March 2026 01:12:14 +0000 (0:00:02.601) 0:01:04.398 **********
2026-03-09 01:13:31.200641 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:13:31.200651 | orchestrator |
2026-03-09 01:13:31.200661 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-09 01:13:31.200671 | orchestrator | Monday 09 March 2026 01:12:17 +0000 (0:00:02.730) 0:01:07.129 **********
2026-03-09 01:13:31.200681 | orchestrator |
2026-03-09 01:13:31.200691 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-09 01:13:31.200701 | orchestrator | Monday 09 March 2026 01:12:17 +0000 (0:00:00.129) 0:01:07.258 **********
2026-03-09 01:13:31.200710 | orchestrator |
2026-03-09 01:13:31.200720 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-09 01:13:31.200738 | orchestrator | Monday 09 March 2026 01:12:17 +0000 (0:00:00.359) 0:01:07.617 **********
2026-03-09 01:13:31.200748 | orchestrator |
2026-03-09 01:13:31.200758 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-09 01:13:31.200768 | orchestrator | Monday 09 March 2026 01:12:18 +0000 (0:00:00.165) 0:01:07.783 **********
2026-03-09 01:13:31.200777 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:13:31.200787 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:13:31.200797 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:13:31.200806 | orchestrator |
2026-03-09 01:13:31.200816 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-09 01:13:31.200826 | orchestrator | Monday 09 March 2026 01:12:20 +0000 (0:00:02.196) 0:01:09.979 **********
2026-03-09 01:13:31.200836 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:13:31.200845 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:13:31.200855 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-09 01:13:31.200866 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
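The handler above probes the freshly restarted Grafana with a bounded number of retries (12, counting down on each failed attempt before the eventual "ok"). A minimal sketch of that bounded-retry wait pattern follows; the function and probe names are illustrative assumptions, not taken from the playbook:

```python
import time
from typing import Callable

def wait_until_ready(probe: Callable[[], bool], retries: int = 12, delay: float = 2.0,
                     sleep: Callable[[float], None] = time.sleep) -> int:
    """Call `probe` up to `retries` times, sleeping `delay` seconds between
    attempts; return the attempt number that succeeded, else raise."""
    for attempt in range(1, retries + 1):
        if probe():
            return attempt
        if attempt < retries:
            sleep(delay)
    raise TimeoutError(f"service not ready after {retries} attempts")

# A probe that succeeds on the third call, mirroring the two
# FAILED - RETRYING lines in the log before the final "ok".
calls = {"n": 0}
def fake_probe() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

attempts = wait_until_ready(fake_probe, retries=12, delay=0.0)
print(attempts)  # → 3
```

Injecting the probe and sleep functions keeps the loop testable without real network calls or real delays.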
2026-03-09 01:13:31.200876 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:13:31.200887 | orchestrator |
2026-03-09 01:13:31.200896 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-09 01:13:31.200906 | orchestrator | Monday 09 March 2026 01:12:47 +0000 (0:00:27.208) 0:01:37.188 **********
2026-03-09 01:13:31.200916 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:13:31.200926 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:13:31.200936 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:13:31.200945 | orchestrator |
2026-03-09 01:13:31.200955 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-09 01:13:31.200965 | orchestrator | Monday 09 March 2026 01:13:22 +0000 (0:00:35.320) 0:02:12.508 **********
2026-03-09 01:13:31.200975 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:13:31.200985 | orchestrator |
2026-03-09 01:13:31.200995 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-09 01:13:31.201005 | orchestrator | Monday 09 March 2026 01:13:25 +0000 (0:00:02.615) 0:02:15.124 **********
2026-03-09 01:13:31.201014 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:13:31.201025 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:13:31.201035 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:13:31.201044 | orchestrator |
2026-03-09 01:13:31.201052 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-09 01:13:31.201061 | orchestrator | Monday 09 March 2026 01:13:26 +0000 (0:00:00.622) 0:02:15.746 **********
2026-03-09 01:13:31.201082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-09 01:13:31.201106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-09 01:13:31.201120 | orchestrator |
2026-03-09 01:13:31.201133 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-09 01:13:31.201147 | orchestrator | Monday 09 March 2026 01:13:28 +0000 (0:00:02.716) 0:02:18.462 **********
2026-03-09 01:13:31.201160 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:13:31.201172 | orchestrator |
2026-03-09 01:13:31.201186 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:13:31.201200 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 01:13:31.201235 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 01:13:31.201244 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-09 01:13:31.201252 | orchestrator |
2026-03-09 01:13:31.201261 | orchestrator |
2026-03-09 01:13:31.201268 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:13:31.201276 | orchestrator | Monday 09 March 2026 01:13:29 +0000 (0:00:00.291) 0:02:18.754 **********
2026-03-09 01:13:31.201285 | orchestrator | ===============================================================================
2026-03-09 01:13:31.201292 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 43.67s
2026-03-09 01:13:31.201300 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 35.32s
2026-03-09 01:13:31.201308 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.21s
2026-03-09 01:13:31.201316 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.73s
2026-03-09 01:13:31.201324 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.72s
2026-03-09 01:13:31.201332 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.62s
2026-03-09 01:13:31.201340 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.60s
2026-03-09 01:13:31.201348 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.20s
2026-03-09 01:13:31.201356 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.62s
2026-03-09 01:13:31.201363 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.54s
2026-03-09 01:13:31.201371 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.44s
2026-03-09 01:13:31.201380 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.37s
2026-03-09 01:13:31.201388 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.35s
2026-03-09 01:13:31.201396 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.33s
2026-03-09 01:13:31.201404 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.25s
2026-03-09 01:13:31.201411 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.19s
2026-03-09 01:13:31.201419 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.02s
2026-03-09 01:13:31.201427 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.98s
2026-03-09 01:13:31.201435 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.83s
2026-03-09 01:13:31.201443 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.81s
2026-03-09 01:13:31.201451 | orchestrator | 2026-03-09 01:13:31 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:31.201459 | orchestrator | 2026-03-09 01:13:31 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:34.245176 | orchestrator | 2026-03-09 01:13:34 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:34.246382 | orchestrator | 2026-03-09 01:13:34 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:34.246412 | orchestrator | 2026-03-09 01:13:34 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:37.294582 | orchestrator | 2026-03-09 01:13:37 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:37.294683 | orchestrator | 2026-03-09 01:13:37 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:37.294695 | orchestrator | 2026-03-09 01:13:37 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:40.338240 | orchestrator | 2026-03-09 01:13:40 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:40.339001 | orchestrator | 2026-03-09 01:13:40 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:40.339031 | orchestrator | 2026-03-09 01:13:40 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:43.374975 | orchestrator | 2026-03-09 01:13:43 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:43.375138 | orchestrator | 2026-03-09 01:13:43 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:43.375154 | orchestrator | 2026-03-09 01:13:43 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:46.417967 | orchestrator | 2026-03-09 01:13:46 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:46.420049 | orchestrator | 2026-03-09 01:13:46 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:46.420102 | orchestrator | 2026-03-09 01:13:46 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:49.457940 | orchestrator | 2026-03-09 01:13:49 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:49.459030 | orchestrator | 2026-03-09 01:13:49 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:49.459061 | orchestrator | 2026-03-09 01:13:49 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:52.562683 | orchestrator | 2026-03-09 01:13:52 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:52.563907 | orchestrator | 2026-03-09 01:13:52 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:52.564024 | orchestrator | 2026-03-09 01:13:52 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:55.601989 | orchestrator | 2026-03-09 01:13:55 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:55.602103 | orchestrator | 2026-03-09 01:13:55 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:55.602112 | orchestrator | 2026-03-09 01:13:55 | INFO  | Wait 1 second(s) until the next check
2026-03-09 01:13:58.643176 | orchestrator | 2026-03-09 01:13:58 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED
2026-03-09 01:13:58.648007 | orchestrator | 2026-03-09 01:13:58 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED
2026-03-09 01:13:58.648089 | orchestrator | 2026-03-09 01:13:58 | INFO  | Wait 1 second(s) until the next check
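The TASKS RECAP above has a fixed plain-text shape (`role : task name ---- 12.34s`), so per-task durations can be pulled out with a short script, e.g. to find the slowest step of a run. The regex and helper below are an illustrative sketch, not part of the job:

```python
import re

# Match "role : Task name ------- 43.67s" recap lines: a lazily matched task
# label, a space-dash-space separator run, then the duration in seconds.
RECAP_RE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_recap(lines: list[str]) -> dict[str, float]:
    """Return {task label: duration in seconds} for every recap line."""
    timings = {}
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            timings[m.group("task")] = float(m.group("secs"))
    return timings

sample = [
    "grafana : Copying over custom dashboards ------------------------------- 43.67s",
    "grafana : Restart remaining grafana containers ------------------------- 35.32s",
]
timings = parse_recap(sample)
print(max(timings, key=timings.get))  # → grafana : Copying over custom dashboards
```

Feeding it all twenty recap lines from this run would show the dashboard copy and the rolling container restarts dominating the 2m18s play.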
2026-03-09 01:14:01.735950 | orchestrator | 2026-03-09 01:14:01 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:01.737831 | orchestrator | 2026-03-09 01:14:01 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:01.738212 | orchestrator | 2026-03-09 01:14:01 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:04.795773 | orchestrator | 2026-03-09 01:14:04 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:04.796693 | orchestrator | 2026-03-09 01:14:04 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:04.796743 | orchestrator | 2026-03-09 01:14:04 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:07.837917 | orchestrator | 2026-03-09 01:14:07 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:07.838201 | orchestrator | 2026-03-09 01:14:07 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:07.838343 | orchestrator | 2026-03-09 01:14:07 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:10.868027 | orchestrator | 2026-03-09 01:14:10 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:10.869008 | orchestrator | 2026-03-09 01:14:10 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:10.869090 | orchestrator | 2026-03-09 01:14:10 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:13.898005 | orchestrator | 2026-03-09 01:14:13 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:13.899086 | orchestrator | 2026-03-09 01:14:13 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:13.899192 | orchestrator | 2026-03-09 01:14:13 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:16.926891 | orchestrator | 2026-03-09 01:14:16 | INFO  | Task 
9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:16.929676 | orchestrator | 2026-03-09 01:14:16 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:16.929724 | orchestrator | 2026-03-09 01:14:16 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:19.959110 | orchestrator | 2026-03-09 01:14:19 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:19.963371 | orchestrator | 2026-03-09 01:14:19 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:19.963453 | orchestrator | 2026-03-09 01:14:19 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:23.017912 | orchestrator | 2026-03-09 01:14:23 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:23.023430 | orchestrator | 2026-03-09 01:14:23 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:23.023561 | orchestrator | 2026-03-09 01:14:23 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:26.065891 | orchestrator | 2026-03-09 01:14:26 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:26.066642 | orchestrator | 2026-03-09 01:14:26 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:26.066668 | orchestrator | 2026-03-09 01:14:26 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:29.116421 | orchestrator | 2026-03-09 01:14:29 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:29.117300 | orchestrator | 2026-03-09 01:14:29 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:29.117491 | orchestrator | 2026-03-09 01:14:29 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:32.150977 | orchestrator | 2026-03-09 01:14:32 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 
01:14:32.151527 | orchestrator | 2026-03-09 01:14:32 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:32.151548 | orchestrator | 2026-03-09 01:14:32 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:35.182241 | orchestrator | 2026-03-09 01:14:35 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:35.182759 | orchestrator | 2026-03-09 01:14:35 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:35.182797 | orchestrator | 2026-03-09 01:14:35 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:38.215316 | orchestrator | 2026-03-09 01:14:38 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:38.215743 | orchestrator | 2026-03-09 01:14:38 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:38.215776 | orchestrator | 2026-03-09 01:14:38 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:41.241986 | orchestrator | 2026-03-09 01:14:41 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:41.242480 | orchestrator | 2026-03-09 01:14:41 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:41.242513 | orchestrator | 2026-03-09 01:14:41 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:44.308993 | orchestrator | 2026-03-09 01:14:44 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:44.309124 | orchestrator | 2026-03-09 01:14:44 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:44.309148 | orchestrator | 2026-03-09 01:14:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:47.387684 | orchestrator | 2026-03-09 01:14:47 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:47.388514 | orchestrator | 2026-03-09 01:14:47 | INFO  | Task 
27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:47.388554 | orchestrator | 2026-03-09 01:14:47 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:50.428904 | orchestrator | 2026-03-09 01:14:50 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:50.431563 | orchestrator | 2026-03-09 01:14:50 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:50.431610 | orchestrator | 2026-03-09 01:14:50 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:53.471833 | orchestrator | 2026-03-09 01:14:53 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:53.473549 | orchestrator | 2026-03-09 01:14:53 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:53.473577 | orchestrator | 2026-03-09 01:14:53 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:56.529618 | orchestrator | 2026-03-09 01:14:56 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:56.531936 | orchestrator | 2026-03-09 01:14:56 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:56.532041 | orchestrator | 2026-03-09 01:14:56 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:14:59.565730 | orchestrator | 2026-03-09 01:14:59 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:14:59.565801 | orchestrator | 2026-03-09 01:14:59 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:14:59.565808 | orchestrator | 2026-03-09 01:14:59 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:02.598151 | orchestrator | 2026-03-09 01:15:02 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:02.600876 | orchestrator | 2026-03-09 01:15:02 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 
01:15:02.600943 | orchestrator | 2026-03-09 01:15:02 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:05.633402 | orchestrator | 2026-03-09 01:15:05 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:05.634575 | orchestrator | 2026-03-09 01:15:05 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:05.634606 | orchestrator | 2026-03-09 01:15:05 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:08.674175 | orchestrator | 2026-03-09 01:15:08 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:08.674644 | orchestrator | 2026-03-09 01:15:08 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:08.674675 | orchestrator | 2026-03-09 01:15:08 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:11.712518 | orchestrator | 2026-03-09 01:15:11 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:11.713764 | orchestrator | 2026-03-09 01:15:11 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:11.713823 | orchestrator | 2026-03-09 01:15:11 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:14.746471 | orchestrator | 2026-03-09 01:15:14 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:14.747108 | orchestrator | 2026-03-09 01:15:14 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:14.747155 | orchestrator | 2026-03-09 01:15:14 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:17.773843 | orchestrator | 2026-03-09 01:15:17 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:17.775058 | orchestrator | 2026-03-09 01:15:17 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:17.775109 | orchestrator | 2026-03-09 01:15:17 | INFO  | Wait 1 second(s) 
until the next check 2026-03-09 01:15:20.816589 | orchestrator | 2026-03-09 01:15:20 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:20.818865 | orchestrator | 2026-03-09 01:15:20 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:20.818913 | orchestrator | 2026-03-09 01:15:20 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:23.868898 | orchestrator | 2026-03-09 01:15:23 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:23.868985 | orchestrator | 2026-03-09 01:15:23 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:23.868996 | orchestrator | 2026-03-09 01:15:23 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:26.931256 | orchestrator | 2026-03-09 01:15:26 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:26.932709 | orchestrator | 2026-03-09 01:15:26 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:26.932754 | orchestrator | 2026-03-09 01:15:26 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:29.976551 | orchestrator | 2026-03-09 01:15:29 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:29.977674 | orchestrator | 2026-03-09 01:15:29 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:29.977772 | orchestrator | 2026-03-09 01:15:29 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:33.022580 | orchestrator | 2026-03-09 01:15:33 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:33.026502 | orchestrator | 2026-03-09 01:15:33 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:33.026874 | orchestrator | 2026-03-09 01:15:33 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:36.063510 | orchestrator | 2026-03-09 
01:15:36 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:36.064683 | orchestrator | 2026-03-09 01:15:36 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:36.064798 | orchestrator | 2026-03-09 01:15:36 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:39.105241 | orchestrator | 2026-03-09 01:15:39 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:39.106502 | orchestrator | 2026-03-09 01:15:39 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:39.106589 | orchestrator | 2026-03-09 01:15:39 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:42.143005 | orchestrator | 2026-03-09 01:15:42 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:42.143817 | orchestrator | 2026-03-09 01:15:42 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:42.143872 | orchestrator | 2026-03-09 01:15:42 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:45.191272 | orchestrator | 2026-03-09 01:15:45 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:45.192804 | orchestrator | 2026-03-09 01:15:45 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:45.192869 | orchestrator | 2026-03-09 01:15:45 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:48.230795 | orchestrator | 2026-03-09 01:15:48 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:48.231576 | orchestrator | 2026-03-09 01:15:48 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:48.231629 | orchestrator | 2026-03-09 01:15:48 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:51.275164 | orchestrator | 2026-03-09 01:15:51 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state 
STARTED 2026-03-09 01:15:51.277312 | orchestrator | 2026-03-09 01:15:51 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:51.277361 | orchestrator | 2026-03-09 01:15:51 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:54.326343 | orchestrator | 2026-03-09 01:15:54 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:54.327236 | orchestrator | 2026-03-09 01:15:54 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:54.327256 | orchestrator | 2026-03-09 01:15:54 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:15:57.368456 | orchestrator | 2026-03-09 01:15:57 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:15:57.369985 | orchestrator | 2026-03-09 01:15:57 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:15:57.370067 | orchestrator | 2026-03-09 01:15:57 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:00.411409 | orchestrator | 2026-03-09 01:16:00 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:00.412697 | orchestrator | 2026-03-09 01:16:00 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:00.412728 | orchestrator | 2026-03-09 01:16:00 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:03.466279 | orchestrator | 2026-03-09 01:16:03 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:03.468994 | orchestrator | 2026-03-09 01:16:03 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:03.469204 | orchestrator | 2026-03-09 01:16:03 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:06.524158 | orchestrator | 2026-03-09 01:16:06 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:06.525099 | orchestrator | 2026-03-09 01:16:06 | INFO  
| Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:06.525207 | orchestrator | 2026-03-09 01:16:06 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:09.566112 | orchestrator | 2026-03-09 01:16:09 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:09.566958 | orchestrator | 2026-03-09 01:16:09 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:09.567051 | orchestrator | 2026-03-09 01:16:09 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:12.615554 | orchestrator | 2026-03-09 01:16:12 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:12.616755 | orchestrator | 2026-03-09 01:16:12 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:12.616810 | orchestrator | 2026-03-09 01:16:12 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:15.665958 | orchestrator | 2026-03-09 01:16:15 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:15.667082 | orchestrator | 2026-03-09 01:16:15 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:15.668396 | orchestrator | 2026-03-09 01:16:15 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:18.711907 | orchestrator | 2026-03-09 01:16:18 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:18.715118 | orchestrator | 2026-03-09 01:16:18 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:18.715218 | orchestrator | 2026-03-09 01:16:18 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:21.780737 | orchestrator | 2026-03-09 01:16:21 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:21.780807 | orchestrator | 2026-03-09 01:16:21 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 
01:16:21.780815 | orchestrator | 2026-03-09 01:16:21 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:24.821337 | orchestrator | 2026-03-09 01:16:24 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:24.823499 | orchestrator | 2026-03-09 01:16:24 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:24.823561 | orchestrator | 2026-03-09 01:16:24 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:27.866145 | orchestrator | 2026-03-09 01:16:27 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:27.868201 | orchestrator | 2026-03-09 01:16:27 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:27.868566 | orchestrator | 2026-03-09 01:16:27 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:30.913547 | orchestrator | 2026-03-09 01:16:30 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:30.915262 | orchestrator | 2026-03-09 01:16:30 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:30.915313 | orchestrator | 2026-03-09 01:16:30 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:33.976948 | orchestrator | 2026-03-09 01:16:33 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:33.978552 | orchestrator | 2026-03-09 01:16:33 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:33.978595 | orchestrator | 2026-03-09 01:16:33 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:37.023224 | orchestrator | 2026-03-09 01:16:37 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:37.024213 | orchestrator | 2026-03-09 01:16:37 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:37.024238 | orchestrator | 2026-03-09 01:16:37 | INFO  | Wait 1 second(s) 
until the next check 2026-03-09 01:16:40.087466 | orchestrator | 2026-03-09 01:16:40 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:40.088789 | orchestrator | 2026-03-09 01:16:40 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:40.089150 | orchestrator | 2026-03-09 01:16:40 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:43.122775 | orchestrator | 2026-03-09 01:16:43 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:43.124514 | orchestrator | 2026-03-09 01:16:43 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:43.124588 | orchestrator | 2026-03-09 01:16:43 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:46.164686 | orchestrator | 2026-03-09 01:16:46 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:46.166250 | orchestrator | 2026-03-09 01:16:46 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:46.166289 | orchestrator | 2026-03-09 01:16:46 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:49.203476 | orchestrator | 2026-03-09 01:16:49 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:49.205133 | orchestrator | 2026-03-09 01:16:49 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:49.205165 | orchestrator | 2026-03-09 01:16:49 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:52.245265 | orchestrator | 2026-03-09 01:16:52 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:52.248813 | orchestrator | 2026-03-09 01:16:52 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:52.248853 | orchestrator | 2026-03-09 01:16:52 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:55.285876 | orchestrator | 2026-03-09 
01:16:55 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:55.287130 | orchestrator | 2026-03-09 01:16:55 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:55.287157 | orchestrator | 2026-03-09 01:16:55 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:16:58.319892 | orchestrator | 2026-03-09 01:16:58 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:16:58.320017 | orchestrator | 2026-03-09 01:16:58 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:16:58.320044 | orchestrator | 2026-03-09 01:16:58 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:01.374080 | orchestrator | 2026-03-09 01:17:01 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:01.374180 | orchestrator | 2026-03-09 01:17:01 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:01.374198 | orchestrator | 2026-03-09 01:17:01 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:04.418124 | orchestrator | 2026-03-09 01:17:04 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:04.418820 | orchestrator | 2026-03-09 01:17:04 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:04.418925 | orchestrator | 2026-03-09 01:17:04 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:07.510967 | orchestrator | 2026-03-09 01:17:07 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:07.511704 | orchestrator | 2026-03-09 01:17:07 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:07.511940 | orchestrator | 2026-03-09 01:17:07 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:10.567972 | orchestrator | 2026-03-09 01:17:10 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state 
STARTED 2026-03-09 01:17:10.571764 | orchestrator | 2026-03-09 01:17:10 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:10.572794 | orchestrator | 2026-03-09 01:17:10 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:13.635108 | orchestrator | 2026-03-09 01:17:13 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:13.636204 | orchestrator | 2026-03-09 01:17:13 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:13.636247 | orchestrator | 2026-03-09 01:17:13 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:16.684223 | orchestrator | 2026-03-09 01:17:16 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:16.684723 | orchestrator | 2026-03-09 01:17:16 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:16.684752 | orchestrator | 2026-03-09 01:17:16 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:19.712869 | orchestrator | 2026-03-09 01:17:19 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:19.713116 | orchestrator | 2026-03-09 01:17:19 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:19.713147 | orchestrator | 2026-03-09 01:17:19 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:22.747314 | orchestrator | 2026-03-09 01:17:22 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:22.748967 | orchestrator | 2026-03-09 01:17:22 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:22.749003 | orchestrator | 2026-03-09 01:17:22 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:25.787852 | orchestrator | 2026-03-09 01:17:25 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:25.789291 | orchestrator | 2026-03-09 01:17:25 | INFO  
| Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:25.789470 | orchestrator | 2026-03-09 01:17:25 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:28.827524 | orchestrator | 2026-03-09 01:17:28 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:28.828913 | orchestrator | 2026-03-09 01:17:28 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:28.828961 | orchestrator | 2026-03-09 01:17:28 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:31.867157 | orchestrator | 2026-03-09 01:17:31 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:31.868334 | orchestrator | 2026-03-09 01:17:31 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:31.868375 | orchestrator | 2026-03-09 01:17:31 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:34.915806 | orchestrator | 2026-03-09 01:17:34 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:34.917451 | orchestrator | 2026-03-09 01:17:34 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:34.917566 | orchestrator | 2026-03-09 01:17:34 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:37.961587 | orchestrator | 2026-03-09 01:17:37 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:37.964209 | orchestrator | 2026-03-09 01:17:37 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:37.964644 | orchestrator | 2026-03-09 01:17:37 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:40.998807 | orchestrator | 2026-03-09 01:17:40 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:40.999644 | orchestrator | 2026-03-09 01:17:40 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 
01:17:40.999676 | orchestrator | 2026-03-09 01:17:40 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:44.052999 | orchestrator | 2026-03-09 01:17:44 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:44.054711 | orchestrator | 2026-03-09 01:17:44 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:44.054784 | orchestrator | 2026-03-09 01:17:44 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:47.086578 | orchestrator | 2026-03-09 01:17:47 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:47.087307 | orchestrator | 2026-03-09 01:17:47 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:47.087347 | orchestrator | 2026-03-09 01:17:47 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:50.126777 | orchestrator | 2026-03-09 01:17:50 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:50.128453 | orchestrator | 2026-03-09 01:17:50 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:50.128510 | orchestrator | 2026-03-09 01:17:50 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:53.163709 | orchestrator | 2026-03-09 01:17:53 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state STARTED 2026-03-09 01:17:53.164332 | orchestrator | 2026-03-09 01:17:53 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:53.164483 | orchestrator | 2026-03-09 01:17:53 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:56.216887 | orchestrator | 2026-03-09 01:17:56 | INFO  | Task 9cf74d09-f409-4a88-8fcc-d0a0321c1000 is in state SUCCESS 2026-03-09 01:17:56.218610 | orchestrator | 2026-03-09 01:17:56.218877 | orchestrator | 2026-03-09 01:17:56.218917 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 
01:17:56.218940 | orchestrator | 2026-03-09 01:17:56.218958 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-09 01:17:56.218978 | orchestrator | Monday 09 March 2026 01:08:13 +0000 (0:00:00.387) 0:00:00.387 ********** 2026-03-09 01:17:56.219108 | orchestrator | changed: [testbed-manager] 2026-03-09 01:17:56.219124 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:56.219135 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:56.219163 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:56.219177 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:56.219190 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:56.219203 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:56.219216 | orchestrator | 2026-03-09 01:17:56.219228 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-09 01:17:56.219241 | orchestrator | Monday 09 March 2026 01:08:14 +0000 (0:00:01.175) 0:00:01.563 ********** 2026-03-09 01:17:56.219254 | orchestrator | changed: [testbed-manager] 2026-03-09 01:17:56.219290 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:56.219302 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:56.219315 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:56.219328 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:56.219341 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:56.219352 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:56.219363 | orchestrator | 2026-03-09 01:17:56.219374 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:17:56.219584 | orchestrator | Monday 09 March 2026 01:08:14 +0000 (0:00:00.671) 0:00:02.234 ********** 2026-03-09 01:17:56.219643 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-09 01:17:56.219656 | orchestrator | changed: [testbed-node-0] => 
(item=enable_nova_True) 2026-03-09 01:17:56.219667 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-09 01:17:56.219679 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-09 01:17:56.219690 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-09 01:17:56.219700 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-09 01:17:56.219711 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-09 01:17:56.219755 | orchestrator | 2026-03-09 01:17:56.219767 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-09 01:17:56.219778 | orchestrator | 2026-03-09 01:17:56.219789 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-09 01:17:56.219799 | orchestrator | Monday 09 March 2026 01:08:15 +0000 (0:00:00.944) 0:00:03.178 ********** 2026-03-09 01:17:56.219810 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:17:56.219821 | orchestrator | 2026-03-09 01:17:56.219831 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-09 01:17:56.219842 | orchestrator | Monday 09 March 2026 01:08:16 +0000 (0:00:00.718) 0:00:03.896 ********** 2026-03-09 01:17:56.219854 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-09 01:17:56.219865 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-09 01:17:56.219876 | orchestrator | 2026-03-09 01:17:56.219887 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-09 01:17:56.219898 | orchestrator | Monday 09 March 2026 01:08:21 +0000 (0:00:04.682) 0:00:08.579 ********** 2026-03-09 01:17:56.219909 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-09 01:17:56.219920 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-09 01:17:56.219931 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.219942 | orchestrator |
2026-03-09 01:17:56.219952 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-09 01:17:56.220001 | orchestrator | Monday 09 March 2026 01:08:26 +0000 (0:00:04.716) 0:00:13.296 **********
2026-03-09 01:17:56.220013 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.220023 | orchestrator |
2026-03-09 01:17:56.220032 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-09 01:17:56.220042 | orchestrator | Monday 09 March 2026 01:08:26 +0000 (0:00:00.725) 0:00:14.021 **********
2026-03-09 01:17:56.220051 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.220061 | orchestrator |
2026-03-09 01:17:56.220070 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-09 01:17:56.220080 | orchestrator | Monday 09 March 2026 01:08:28 +0000 (0:00:02.040) 0:00:16.061 **********
2026-03-09 01:17:56.220090 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.220099 | orchestrator |
2026-03-09 01:17:56.220109 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-09 01:17:56.220118 | orchestrator | Monday 09 March 2026 01:08:37 +0000 (0:00:08.692) 0:00:24.754 **********
2026-03-09 01:17:56.220128 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.220137 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.220171 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.220194 | orchestrator |
2026-03-09 01:17:56.220204 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-09 01:17:56.220214 | orchestrator | Monday 09 March 2026 01:08:38 +0000 (0:00:00.584) 0:00:25.338 **********
2026-03-09 01:17:56.220224 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:17:56.220234 | orchestrator |
2026-03-09 01:17:56.220243 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-09 01:17:56.220253 | orchestrator | Monday 09 March 2026 01:09:14 +0000 (0:00:36.226) 0:01:01.565 **********
2026-03-09 01:17:56.220263 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.220272 | orchestrator |
2026-03-09 01:17:56.220282 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-09 01:17:56.220292 | orchestrator | Monday 09 March 2026 01:09:30 +0000 (0:00:15.833) 0:01:17.399 **********
2026-03-09 01:17:56.220302 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:17:56.220311 | orchestrator |
2026-03-09 01:17:56.220322 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-09 01:17:56.220332 | orchestrator | Monday 09 March 2026 01:09:43 +0000 (0:00:12.859) 0:01:30.258 **********
2026-03-09 01:17:56.220364 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:17:56.220374 | orchestrator |
2026-03-09 01:17:56.220437 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-09 01:17:56.220450 | orchestrator | Monday 09 March 2026 01:09:44 +0000 (0:00:01.965) 0:01:32.223 **********
2026-03-09 01:17:56.220459 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.220469 | orchestrator |
2026-03-09 01:17:56.220479 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-09 01:17:56.220490 | orchestrator | Monday 09 March 2026 01:09:45 +0000 (0:00:00.692) 0:01:32.916 **********
2026-03-09 01:17:56.220515 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:17:56.220544 | orchestrator |
2026-03-09 01:17:56.220560 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-09 01:17:56.220575 | orchestrator | Monday 09 March 2026 01:09:46 +0000 (0:00:00.934) 0:01:33.850 **********
2026-03-09 01:17:56.220590 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:17:56.220606 | orchestrator |
2026-03-09 01:17:56.220620 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-09 01:17:56.220634 | orchestrator | Monday 09 March 2026 01:10:05 +0000 (0:00:18.659) 0:01:52.509 **********
2026-03-09 01:17:56.220650 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.220665 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.220682 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.220696 | orchestrator |
2026-03-09 01:17:56.220713 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-09 01:17:56.220731 | orchestrator |
2026-03-09 01:17:56.220748 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-09 01:17:56.220764 | orchestrator | Monday 09 March 2026 01:10:05 +0000 (0:00:00.403) 0:01:52.913 **********
2026-03-09 01:17:56.220779 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:17:56.220796 | orchestrator |
2026-03-09 01:17:56.220812 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-09 01:17:56.220829 | orchestrator | Monday 09 March 2026 01:10:06 +0000 (0:00:00.644) 0:01:53.557 **********
2026-03-09 01:17:56.220844 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.220861 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.220874 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.220884 | orchestrator |
2026-03-09 01:17:56.220894 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-09 01:17:56.220903 | orchestrator | Monday 09 March 2026 01:10:08 +0000 (0:00:01.991) 0:01:55.548 **********
2026-03-09 01:17:56.220913 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.220922 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.220932 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.220953 | orchestrator |
2026-03-09 01:17:56.220963 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-09 01:17:56.220972 | orchestrator | Monday 09 March 2026 01:10:10 +0000 (0:00:02.016) 0:01:57.565 **********
2026-03-09 01:17:56.220982 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.220992 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221001 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221011 | orchestrator |
2026-03-09 01:17:56.221020 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-09 01:17:56.221030 | orchestrator | Monday 09 March 2026 01:10:10 +0000 (0:00:00.386) 0:01:57.951 **********
2026-03-09 01:17:56.221040 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-09 01:17:56.221050 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221059 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-09 01:17:56.221069 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221079 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-09 01:17:56.221088 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-09 01:17:56.221098 | orchestrator |
2026-03-09 01:17:56.221108 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-09 01:17:56.221117 | orchestrator | Monday 09 March 2026 01:10:20 +0000 (0:00:09.946) 0:02:07.898 **********
2026-03-09 01:17:56.221127 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.221137 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221146 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221156 | orchestrator |
2026-03-09 01:17:56.221165 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-09 01:17:56.221175 | orchestrator | Monday 09 March 2026 01:10:21 +0000 (0:00:00.404) 0:02:08.302 **********
2026-03-09 01:17:56.221185 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-09 01:17:56.221194 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.221204 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-09 01:17:56.221214 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221223 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-09 01:17:56.221233 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221243 | orchestrator |
2026-03-09 01:17:56.221252 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-09 01:17:56.221262 | orchestrator | Monday 09 March 2026 01:10:21 +0000 (0:00:00.731) 0:02:09.034 **********
2026-03-09 01:17:56.221272 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221281 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221291 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.221300 | orchestrator |
2026-03-09 01:17:56.221310 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-09 01:17:56.221320 | orchestrator | Monday 09 March 2026 01:10:22 +0000 (0:00:01.069) 0:02:09.774 **********
2026-03-09 01:17:56.221350 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221360 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221370 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.221380 | orchestrator |
2026-03-09 01:17:56.221420 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-09 01:17:56.221431 | orchestrator | Monday 09 March 2026 01:10:23 +0000 (0:00:02.295) 0:02:10.844 **********
2026-03-09 01:17:56.221441 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221451 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221473 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.221483 | orchestrator |
2026-03-09 01:17:56.221499 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-09 01:17:56.221515 | orchestrator | Monday 09 March 2026 01:10:25 +0000 (0:00:02.295) 0:02:13.140 **********
2026-03-09 01:17:56.221531 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221547 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221575 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:17:56.221591 | orchestrator |
2026-03-09 01:17:56.221611 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-09 01:17:56.221621 | orchestrator | Monday 09 March 2026 01:10:50 +0000 (0:00:24.281) 0:02:37.421 **********
2026-03-09 01:17:56.221631 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221640 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221650 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:17:56.221659 | orchestrator |
2026-03-09 01:17:56.221669 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-09 01:17:56.221679 | orchestrator | Monday 09 March 2026 01:11:07 +0000 (0:00:17.480) 0:02:54.902 **********
2026-03-09 01:17:56.221688 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:17:56.221698 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221708 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221717 | orchestrator |
2026-03-09 01:17:56.221727 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-09 01:17:56.221736 | orchestrator | Monday 09 March 2026 01:11:08 +0000 (0:00:01.153) 0:02:56.056 **********
2026-03-09 01:17:56.221746 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221755 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221765 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.221775 | orchestrator |
2026-03-09 01:17:56.221785 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-09 01:17:56.221795 | orchestrator | Monday 09 March 2026 01:11:23 +0000 (0:00:14.733) 0:03:10.789 **********
2026-03-09 01:17:56.221804 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.221814 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221824 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221834 | orchestrator |
2026-03-09 01:17:56.221844 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-09 01:17:56.221854 | orchestrator | Monday 09 March 2026 01:11:24 +0000 (0:00:01.363) 0:03:12.153 **********
2026-03-09 01:17:56.221863 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.221873 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.221882 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.221892 | orchestrator |
2026-03-09 01:17:56.221901 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-09 01:17:56.221911 | orchestrator |
2026-03-09 01:17:56.221921 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-09 01:17:56.221930 | orchestrator | Monday 09 March 2026 01:11:25 +0000 (0:00:00.675) 0:03:12.829 **********
2026-03-09 01:17:56.221940 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:17:56.221950 | orchestrator |
2026-03-09 01:17:56.221960 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-09 01:17:56.221970 | orchestrator | Monday 09 March 2026 01:11:26 +0000 (0:00:00.712) 0:03:13.541 **********
2026-03-09 01:17:56.221979 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-09 01:17:56.221989 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-09 01:17:56.221999 | orchestrator |
2026-03-09 01:17:56.222008 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-09 01:17:56.222076 | orchestrator | Monday 09 March 2026 01:11:30 +0000 (0:00:03.929) 0:03:17.471 **********
2026-03-09 01:17:56.222089 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-09 01:17:56.222115 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-09 01:17:56.222125 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-09 01:17:56.222135 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-09 01:17:56.222152 | orchestrator |
2026-03-09 01:17:56.222162 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-09 01:17:56.222182 | orchestrator | Monday 09 March 2026 01:11:37 +0000 (0:00:07.669) 0:03:25.140 **********
2026-03-09 01:17:56.222192 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-09 01:17:56.222202 | orchestrator |
2026-03-09 01:17:56.222212 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-09 01:17:56.222221 | orchestrator | Monday 09 March 2026 01:11:41 +0000 (0:00:03.585) 0:03:28.726 **********
2026-03-09 01:17:56.222231 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-09 01:17:56.222241 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-09 01:17:56.222250 | orchestrator |
2026-03-09 01:17:56.222260 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-09 01:17:56.222270 | orchestrator | Monday 09 March 2026 01:11:45 +0000 (0:00:04.471) 0:03:33.198 **********
2026-03-09 01:17:56.222279 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-09 01:17:56.222289 | orchestrator |
2026-03-09 01:17:56.222299 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-09 01:17:56.222308 | orchestrator | Monday 09 March 2026 01:11:49 +0000 (0:00:03.484) 0:03:36.683 **********
2026-03-09 01:17:56.222318 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-09 01:17:56.222328 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-09 01:17:56.222337 | orchestrator |
2026-03-09 01:17:56.222347 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-09 01:17:56.222366 | orchestrator | Monday 09 March 2026 01:11:57 +0000 (0:00:08.191) 0:03:44.875 **********
2026-03-09 01:17:56.222411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-09 01:17:56.222429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-09 01:17:56.222449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-09 01:17:56.222472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.222499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.222517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.222560 | orchestrator |
2026-03-09 01:17:56.222581 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-09 01:17:56.222597 | orchestrator | Monday 09 March 2026 01:11:59 +0000 (0:00:01.638) 0:03:46.514 **********
2026-03-09 01:17:56.222612 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.222627 | orchestrator |
2026-03-09 01:17:56.222642 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-09 01:17:56.222657 | orchestrator | Monday 09 March 2026 01:11:59 +0000 (0:00:00.156) 0:03:46.670 **********
2026-03-09 01:17:56.222672 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.222688 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.222704 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.222730 | orchestrator |
2026-03-09 01:17:56.222745 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-09 01:17:56.222761 | orchestrator | Monday 09 March 2026 01:12:00 +0000 (0:00:00.606) 0:03:47.277 **********
2026-03-09 01:17:56.222776 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-09 01:17:56.222791 | orchestrator |
2026-03-09 01:17:56.222809 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-09 01:17:56.222825 | orchestrator | Monday 09 March 2026 01:12:00 +0000 (0:00:00.903) 0:03:48.180 **********
2026-03-09 01:17:56.222841 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.222857 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.222872 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.222887 | orchestrator |
2026-03-09 01:17:56.222903 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-09 01:17:56.222919 | orchestrator | Monday 09 March 2026 01:12:01 +0000 (0:00:00.342) 0:03:48.523 **********
2026-03-09 01:17:56.222935 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-09 01:17:56.222952 | orchestrator |
2026-03-09 01:17:56.222967 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-09 01:17:56.222984 | orchestrator | Monday 09 March 2026 01:12:01 +0000 (0:00:00.651) 0:03:49.174 **********
2026-03-09 01:17:56.223003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-09 01:17:56.223077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-09 01:17:56.223172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-09 01:17:56.223228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.223249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.223278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.223295 | orchestrator |
2026-03-09 01:17:56.223311 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-09 01:17:56.223328 | orchestrator | Monday 09 March 2026 01:12:05 +0000 (0:00:03.210) 0:03:52.385 **********
2026-03-09 01:17:56.223360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-09 01:17:56.223496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.223519 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.223540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-09 01:17:56.223558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.223573 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.223603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-09 01:17:56.223622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.223690 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.223699 | orchestrator |
2026-03-09 01:17:56.223707 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-09 01:17:56.223715 | orchestrator | Monday 09 March 2026 01:12:05 +0000 (0:00:00.809) 0:03:53.194 **********
2026-03-09 01:17:56.223724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 01:17:56.223733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.223741 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.223762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 01:17:56.223794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.223803 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.223811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 01:17:56.223820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.223829 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.223837 | orchestrator | 2026-03-09 01:17:56.223845 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-09 01:17:56.223854 | orchestrator | Monday 09 March 2026 01:12:06 +0000 (0:00:00.922) 0:03:54.116 ********** 2026-03-09 01:17:56.223872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:56.223891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:56.223900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:56.223909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.223929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.223963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.223988 | orchestrator | 2026-03-09 01:17:56.224002 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-09 01:17:56.224014 | orchestrator | Monday 09 March 2026 01:12:09 +0000 (0:00:02.743) 0:03:56.859 ********** 2026-03-09 01:17:56.224022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:56.224032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:56.224051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:56.224067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.224076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.224085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.224093 | orchestrator | 2026-03-09 01:17:56.224101 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-09 01:17:56.224109 | orchestrator | Monday 09 March 2026 01:12:16 +0000 (0:00:07.123) 0:04:03.983 ********** 2026-03-09 01:17:56.224117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 01:17:56.224130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.224144 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.224156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 01:17:56.224165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.224174 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.224183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-09 01:17:56.224191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.224206 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.224214 | orchestrator | 2026-03-09 01:17:56.224222 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-09 01:17:56.224230 | orchestrator | Monday 09 March 2026 01:12:17 +0000 (0:00:00.724) 0:04:04.707 ********** 2026-03-09 01:17:56.224238 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:56.224246 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:56.224254 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:56.224262 | orchestrator | 2026-03-09 01:17:56.224275 | orchestrator | TASK [nova : 
Copying over vendordata file] ************************************* 2026-03-09 01:17:56.224283 | orchestrator | Monday 09 March 2026 01:12:19 +0000 (0:00:01.819) 0:04:06.527 ********** 2026-03-09 01:17:56.224291 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.224299 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.224307 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.224315 | orchestrator | 2026-03-09 01:17:56.224323 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-09 01:17:56.224338 | orchestrator | Monday 09 March 2026 01:12:19 +0000 (0:00:00.374) 0:04:06.901 ********** 2026-03-09 01:17:56.224347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:56.224356 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:56.224370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-09 01:17:56.224414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.224425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.224434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.224442 | orchestrator | 2026-03-09 01:17:56.224450 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-09 01:17:56.224458 | orchestrator | Monday 09 March 2026 01:12:21 +0000 (0:00:02.290) 0:04:09.192 ********** 2026-03-09 01:17:56.224466 | orchestrator | 2026-03-09 01:17:56.224474 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-09 01:17:56.224482 | orchestrator | Monday 09 March 2026 01:12:22 +0000 (0:00:00.163) 0:04:09.355 ********** 2026-03-09 01:17:56.224490 | orchestrator | 2026-03-09 01:17:56.224498 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-09 01:17:56.224506 | orchestrator | Monday 09 March 2026 01:12:22 +0000 (0:00:00.143) 0:04:09.499 ********** 2026-03-09 01:17:56.224527 | orchestrator | 2026-03-09 01:17:56.224535 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-09 01:17:56.224543 | orchestrator | Monday 09 March 2026 01:12:22 +0000 (0:00:00.147) 0:04:09.647 ********** 2026-03-09 01:17:56.224551 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:56.224559 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:56.224567 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:56.224575 | orchestrator | 2026-03-09 01:17:56.224583 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-09 01:17:56.224591 | orchestrator | Monday 09 March 2026 01:12:41 +0000 (0:00:19.032) 0:04:28.679 ********** 2026-03-09 
01:17:56.224605 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:17:56.224614 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:17:56.224622 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:17:56.224630 | orchestrator | 2026-03-09 01:17:56.224638 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-09 01:17:56.224646 | orchestrator | 2026-03-09 01:17:56.224654 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-09 01:17:56.224662 | orchestrator | Monday 09 March 2026 01:12:48 +0000 (0:00:07.012) 0:04:35.692 ********** 2026-03-09 01:17:56.224670 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:17:56.224678 | orchestrator | 2026-03-09 01:17:56.224686 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-09 01:17:56.224695 | orchestrator | Monday 09 March 2026 01:12:50 +0000 (0:00:01.584) 0:04:37.277 ********** 2026-03-09 01:17:56.224703 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:56.224711 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:56.224719 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:56.224727 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.224734 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.224742 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.224750 | orchestrator | 2026-03-09 01:17:56.224758 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-09 01:17:56.224766 | orchestrator | Monday 09 March 2026 01:12:51 +0000 (0:00:01.010) 0:04:38.288 ********** 2026-03-09 01:17:56.224774 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.224782 | orchestrator | skipping: [testbed-node-1] 2026-03-09 
01:17:56.224790 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.224798 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:17:56.224806 | orchestrator | 2026-03-09 01:17:56.224814 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-09 01:17:56.224828 | orchestrator | Monday 09 March 2026 01:12:52 +0000 (0:00:01.319) 0:04:39.608 ********** 2026-03-09 01:17:56.224837 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-09 01:17:56.224845 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-09 01:17:56.224853 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-09 01:17:56.224861 | orchestrator | 2026-03-09 01:17:56.224869 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-09 01:17:56.224877 | orchestrator | Monday 09 March 2026 01:12:53 +0000 (0:00:00.773) 0:04:40.381 ********** 2026-03-09 01:17:56.224890 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-09 01:17:56.224898 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-09 01:17:56.224906 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-09 01:17:56.224914 | orchestrator | 2026-03-09 01:17:56.224926 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-09 01:17:56.224940 | orchestrator | Monday 09 March 2026 01:12:54 +0000 (0:00:01.513) 0:04:41.894 ********** 2026-03-09 01:17:56.224952 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-09 01:17:56.224966 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:56.224979 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-09 01:17:56.224991 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:56.225003 | orchestrator | skipping: [testbed-node-5] => 
(item=br_netfilter)  2026-03-09 01:17:56.225017 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:56.225030 | orchestrator | 2026-03-09 01:17:56.225043 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-09 01:17:56.225056 | orchestrator | Monday 09 March 2026 01:12:55 +0000 (0:00:00.643) 0:04:42.537 ********** 2026-03-09 01:17:56.225070 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-09 01:17:56.225093 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-09 01:17:56.225107 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.225115 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-09 01:17:56.225123 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-09 01:17:56.225131 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-09 01:17:56.225139 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.225147 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-09 01:17:56.225155 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-09 01:17:56.225163 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-09 01:17:56.225171 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-09 01:17:56.225179 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.225186 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-09 01:17:56.225194 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-09 01:17:56.225202 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-09 
01:17:56.225210 | orchestrator | 2026-03-09 01:17:56.225219 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-09 01:17:56.225227 | orchestrator | Monday 09 March 2026 01:12:56 +0000 (0:00:01.509) 0:04:44.046 ********** 2026-03-09 01:17:56.225235 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.225243 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.225250 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.225258 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:56.225266 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:56.225274 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:56.225281 | orchestrator | 2026-03-09 01:17:56.225289 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-09 01:17:56.225297 | orchestrator | Monday 09 March 2026 01:12:58 +0000 (0:00:01.296) 0:04:45.343 ********** 2026-03-09 01:17:56.225305 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.225313 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.225321 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.225329 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:56.225337 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:56.225345 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:56.225353 | orchestrator | 2026-03-09 01:17:56.225360 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-09 01:17:56.225368 | orchestrator | Monday 09 March 2026 01:13:00 +0000 (0:00:02.129) 0:04:47.472 ********** 2026-03-09 01:17:56.225377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225448 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225458 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225474 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225655 | orchestrator | 2026-03-09 01:17:56.225663 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-09 01:17:56.225672 | orchestrator | Monday 09 March 2026 01:13:02 +0000 (0:00:02.533) 0:04:50.005 ********** 2026-03-09 01:17:56.225680 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:17:56.225690 | orchestrator | 2026-03-09 01:17:56.225698 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-09 01:17:56.225706 | orchestrator | Monday 09 March 2026 01:13:04 +0000 (0:00:01.477) 0:04:51.483 ********** 2026-03-09 01:17:56.225714 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225723 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225749 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225789 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225820 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225887 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225896 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225905 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.225922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2026-03-09 01:17:56.225935 | orchestrator | 2026-03-09 01:17:56.225943 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-09 01:17:56.225951 | orchestrator | Monday 09 March 2026 01:13:08 +0000 (0:00:04.007) 0:04:55.490 ********** 2026-03-09 01:17:56.226351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:56.226374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:56.226438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.226452 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:56.226461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:56.226470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:56.226495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.226504 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:56.226518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2026-03-09 01:17:56.226529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:56.226544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.226560 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:56.226581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:17:56.226605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:17:56.226627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.226647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2026-03-09 01:17:56.226661 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.226676 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.226689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:17:56.226702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.226715 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.226729 | orchestrator | 2026-03-09 01:17:56.226743 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-09 01:17:56.226758 | orchestrator | Monday 09 March 2026 01:13:10 +0000 (0:00:02.017) 0:04:57.508 ********** 2026-03-09 01:17:56.226772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:56.226794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:56.226814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.226828 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:56.226850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:56.226866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:56.226880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.226907 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:56.226920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:56.226935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:56.226964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.226979 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:56.226988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:17:56.226996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.227004 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.227018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:17:56.227026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.227034 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.227041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-09 01:17:56.227053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-09 01:17:56.227060 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.227067 | orchestrator | 2026-03-09 01:17:56.227074 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-09 01:17:56.227085 | orchestrator | Monday 09 March 2026 01:13:13 +0000 (0:00:02.766) 0:05:00.275 ********** 2026-03-09 01:17:56.227092 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.227099 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.227106 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.227113 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-09 01:17:56.227120 | orchestrator | 2026-03-09 01:17:56.227127 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-09 01:17:56.227133 | orchestrator | Monday 09 March 2026 01:13:14 +0000 (0:00:01.235) 0:05:01.511 ********** 2026-03-09 
01:17:56.227140 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 01:17:56.227147 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 01:17:56.227154 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 01:17:56.227160 | orchestrator | 2026-03-09 01:17:56.227167 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-09 01:17:56.227174 | orchestrator | Monday 09 March 2026 01:13:15 +0000 (0:00:01.355) 0:05:02.866 ********** 2026-03-09 01:17:56.227181 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 01:17:56.227188 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 01:17:56.227195 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 01:17:56.227206 | orchestrator | 2026-03-09 01:17:56.227213 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-09 01:17:56.227219 | orchestrator | Monday 09 March 2026 01:13:16 +0000 (0:00:01.174) 0:05:04.040 ********** 2026-03-09 01:17:56.227229 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:17:56.227240 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:17:56.227251 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:17:56.227261 | orchestrator | 2026-03-09 01:17:56.227270 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-09 01:17:56.227280 | orchestrator | Monday 09 March 2026 01:13:17 +0000 (0:00:00.576) 0:05:04.617 ********** 2026-03-09 01:17:56.227290 | orchestrator | ok: [testbed-node-3] 2026-03-09 01:17:56.227300 | orchestrator | ok: [testbed-node-4] 2026-03-09 01:17:56.227310 | orchestrator | ok: [testbed-node-5] 2026-03-09 01:17:56.227321 | orchestrator | 2026-03-09 01:17:56.227332 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-09 01:17:56.227342 | orchestrator | Monday 09 March 2026 01:13:18 +0000 (0:00:00.933) 0:05:05.551 ********** 
2026-03-09 01:17:56.227354 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-09 01:17:56.227365 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-09 01:17:56.227376 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-09 01:17:56.227409 | orchestrator | 2026-03-09 01:17:56.227421 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-09 01:17:56.227432 | orchestrator | Monday 09 March 2026 01:13:19 +0000 (0:00:01.179) 0:05:06.731 ********** 2026-03-09 01:17:56.227444 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-09 01:17:56.227454 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-09 01:17:56.227465 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-09 01:17:56.227475 | orchestrator | 2026-03-09 01:17:56.227483 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-09 01:17:56.227489 | orchestrator | Monday 09 March 2026 01:13:20 +0000 (0:00:01.274) 0:05:08.005 ********** 2026-03-09 01:17:56.227496 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-09 01:17:56.227503 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-09 01:17:56.227510 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-09 01:17:56.227516 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-09 01:17:56.227523 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-09 01:17:56.227530 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-09 01:17:56.227536 | orchestrator | 2026-03-09 01:17:56.227543 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-09 01:17:56.227549 | orchestrator | Monday 09 March 2026 01:13:25 +0000 (0:00:04.245) 0:05:12.250 ********** 2026-03-09 
01:17:56.227556 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:56.227563 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:56.227569 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:56.227576 | orchestrator | 2026-03-09 01:17:56.227582 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-09 01:17:56.227589 | orchestrator | Monday 09 March 2026 01:13:25 +0000 (0:00:00.660) 0:05:12.911 ********** 2026-03-09 01:17:56.227595 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:56.227602 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:56.227609 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:56.227615 | orchestrator | 2026-03-09 01:17:56.227622 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-09 01:17:56.227628 | orchestrator | Monday 09 March 2026 01:13:26 +0000 (0:00:00.338) 0:05:13.251 ********** 2026-03-09 01:17:56.227635 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:56.227642 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:56.227648 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:56.227662 | orchestrator | 2026-03-09 01:17:56.227669 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-09 01:17:56.227676 | orchestrator | Monday 09 March 2026 01:13:27 +0000 (0:00:01.303) 0:05:14.555 ********** 2026-03-09 01:17:56.227690 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-09 01:17:56.227699 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-09 01:17:56.227710 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 
'enabled': True}) 2026-03-09 01:17:56.227732 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-09 01:17:56.227744 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-09 01:17:56.227755 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-09 01:17:56.227766 | orchestrator | 2026-03-09 01:17:56.227777 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-09 01:17:56.227789 | orchestrator | Monday 09 March 2026 01:13:31 +0000 (0:00:03.851) 0:05:18.406 ********** 2026-03-09 01:17:56.227800 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 01:17:56.227810 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-09 01:17:56.227821 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 01:17:56.227833 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-09 01:17:56.227844 | orchestrator | changed: [testbed-node-3] 2026-03-09 01:17:56.227856 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-09 01:17:56.227868 | orchestrator | changed: [testbed-node-4] 2026-03-09 01:17:56.227875 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-09 01:17:56.227885 | orchestrator | changed: [testbed-node-5] 2026-03-09 01:17:56.227896 | orchestrator | 2026-03-09 01:17:56.227910 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-09 01:17:56.227924 | orchestrator | Monday 09 March 2026 01:13:34 +0000 (0:00:03.727) 0:05:22.134 ********** 2026-03-09 01:17:56.227935 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.227945 | orchestrator | skipping: [testbed-node-2] 2026-03-09 
01:17:56.227956 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.227966 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-5, testbed-node-4 2026-03-09 01:17:56.227977 | orchestrator | 2026-03-09 01:17:56.227988 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-09 01:17:56.227999 | orchestrator | Monday 09 March 2026 01:13:37 +0000 (0:00:02.332) 0:05:24.466 ********** 2026-03-09 01:17:56.228010 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 01:17:56.228020 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-09 01:17:56.228031 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-09 01:17:56.228042 | orchestrator | 2026-03-09 01:17:56.228053 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-09 01:17:56.228065 | orchestrator | Monday 09 March 2026 01:13:38 +0000 (0:00:01.609) 0:05:26.076 ********** 2026-03-09 01:17:56.228076 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:56.228086 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:56.228093 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:56.228100 | orchestrator | 2026-03-09 01:17:56.228107 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-09 01:17:56.228114 | orchestrator | Monday 09 March 2026 01:13:39 +0000 (0:00:00.364) 0:05:26.441 ********** 2026-03-09 01:17:56.228121 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:56.228127 | orchestrator | 2026-03-09 01:17:56.228143 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-09 01:17:56.228150 | orchestrator | Monday 09 March 2026 01:13:39 +0000 (0:00:00.145) 0:05:26.586 ********** 2026-03-09 01:17:56.228157 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:56.228163 | orchestrator | skipping: 
[testbed-node-4] 2026-03-09 01:17:56.228170 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:56.228177 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.228183 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.228190 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.228196 | orchestrator | 2026-03-09 01:17:56.228203 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-09 01:17:56.228210 | orchestrator | Monday 09 March 2026 01:13:40 +0000 (0:00:00.709) 0:05:27.296 ********** 2026-03-09 01:17:56.228217 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-09 01:17:56.228223 | orchestrator | 2026-03-09 01:17:56.228230 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-09 01:17:56.228237 | orchestrator | Monday 09 March 2026 01:13:41 +0000 (0:00:01.293) 0:05:28.589 ********** 2026-03-09 01:17:56.228244 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:56.228250 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:56.228257 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:56.228263 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.228270 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.228277 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.228283 | orchestrator | 2026-03-09 01:17:56.228290 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-09 01:17:56.228297 | orchestrator | Monday 09 March 2026 01:13:42 +0000 (0:00:00.802) 0:05:29.392 ********** 2026-03-09 01:17:56.228318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228328 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228336 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228376 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228519 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228538 | orchestrator | 2026-03-09 01:17:56.228549 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-09 01:17:56.228559 | orchestrator | Monday 09 March 2026 01:13:46 +0000 (0:00:04.528) 0:05:33.920 ********** 2026-03-09 01:17:56.228570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:56.228581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:56.228592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:56.228615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:56.228628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-09 01:17:56.228647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-09 01:17:56.228660 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228697 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-09 01:17:56.228737 | orchestrator | 2026-03-09 01:17:56.228744 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-09 01:17:56.228750 | orchestrator | Monday 09 March 2026 01:13:54 +0000 (0:00:08.118) 0:05:42.039 ********** 2026-03-09 01:17:56.228757 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:56.228764 | orchestrator | skipping: [testbed-node-3] 
2026-03-09 01:17:56.228772 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:56.228783 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.228800 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.228811 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.228822 | orchestrator | 2026-03-09 01:17:56.228833 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-09 01:17:56.228843 | orchestrator | Monday 09 March 2026 01:13:56 +0000 (0:00:02.202) 0:05:44.242 ********** 2026-03-09 01:17:56.228854 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-09 01:17:56.228871 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-09 01:17:56.228884 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-09 01:17:56.228903 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-09 01:17:56.228914 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-09 01:17:56.228924 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-09 01:17:56.228931 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-09 01:17:56.228938 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.228945 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-09 01:17:56.228951 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.228958 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-09 01:17:56.228965 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.228971 | orchestrator | changed: 
[testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-09 01:17:56.228978 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-09 01:17:56.228985 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-09 01:17:56.228992 | orchestrator | 2026-03-09 01:17:56.228998 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-09 01:17:56.229005 | orchestrator | Monday 09 March 2026 01:14:01 +0000 (0:00:04.481) 0:05:48.724 ********** 2026-03-09 01:17:56.229012 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:56.229018 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:56.229025 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:56.229031 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.229038 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.229045 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.229051 | orchestrator | 2026-03-09 01:17:56.229058 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-09 01:17:56.229065 | orchestrator | Monday 09 March 2026 01:14:02 +0000 (0:00:00.704) 0:05:49.428 ********** 2026-03-09 01:17:56.229072 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-09 01:17:56.229078 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-09 01:17:56.229085 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-09 01:17:56.229092 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-09 01:17:56.229098 | orchestrator 
| changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-09 01:17:56.229105 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-09 01:17:56.229112 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-09 01:17:56.229118 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-09 01:17:56.229125 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-09 01:17:56.229131 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-09 01:17:56.229138 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.229145 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-09 01:17:56.229151 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.229158 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-09 01:17:56.229169 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.229177 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:17:56.229183 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:17:56.229190 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:17:56.229197 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 
'nova-libvirt'}) 2026-03-09 01:17:56.229208 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:17:56.229215 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-09 01:17:56.229222 | orchestrator | 2026-03-09 01:17:56.229228 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-09 01:17:56.229235 | orchestrator | Monday 09 March 2026 01:14:08 +0000 (0:00:06.361) 0:05:55.789 ********** 2026-03-09 01:17:56.229248 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 01:17:56.229255 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 01:17:56.229262 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-09 01:17:56.229269 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:17:56.229275 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-09 01:17:56.229282 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:17:56.229289 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-09 01:17:56.229295 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-09 01:17:56.229302 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-09 01:17:56.229310 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 01:17:56.229321 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  
2026-03-09 01:17:56.229335 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-09 01:17:56.229352 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-09 01:17:56.229362 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.229372 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-09 01:17:56.229403 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.229415 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-09 01:17:56.229426 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.229436 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:17:56.229445 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:17:56.229456 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-09 01:17:56.229467 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:17:56.229476 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:17:56.229488 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-09 01:17:56.229498 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:17:56.229532 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:17:56.229544 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-09 01:17:56.229552 | orchestrator | 2026-03-09 01:17:56.229558 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 
2026-03-09 01:17:56.229565 | orchestrator | Monday 09 March 2026 01:14:16 +0000 (0:00:07.557) 0:06:03.346 **********
2026-03-09 01:17:56.229571 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:56.229578 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:56.229585 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:56.229591 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.229598 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.229605 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.229611 | orchestrator |
2026-03-09 01:17:56.229618 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-09 01:17:56.229625 | orchestrator | Monday 09 March 2026 01:14:16 +0000 (0:00:00.752) 0:06:04.099 **********
2026-03-09 01:17:56.229631 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:56.229638 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:56.229645 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:56.229651 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.229658 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.229665 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.229671 | orchestrator |
2026-03-09 01:17:56.229678 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-03-09 01:17:56.229685 | orchestrator | Monday 09 March 2026 01:14:17 +0000 (0:00:00.629) 0:06:04.728 **********
2026-03-09 01:17:56.229691 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.229698 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.229704 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.229711 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:17:56.229717 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:17:56.229724 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:17:56.229731 | orchestrator |
2026-03-09 01:17:56.229737 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-03-09 01:17:56.229744 | orchestrator | Monday 09 March 2026 01:14:19 +0000 (0:00:02.189) 0:06:06.917 **********
2026-03-09 01:17:56.229764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:56.229773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:56.229784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:56.229791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:56.229798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.229805 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:56.229820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.229827 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:56.229838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:56.229850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:56.229857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.229864 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:56.229871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:56.229878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.229885 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.229984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:56.230010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.230056 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.230077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:56.230090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.230102 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.230113 | orchestrator |
2026-03-09 01:17:56.230124 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-03-09 01:17:56.230134 | orchestrator | Monday 09 March 2026 01:14:21 +0000 (0:00:01.980) 0:06:08.897 **********
2026-03-09 01:17:56.230142 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-09 01:17:56.230148 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-09 01:17:56.230155 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:56.230162 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-09 01:17:56.230169 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-09 01:17:56.230175 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:56.230182 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-09 01:17:56.230189 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-09 01:17:56.230195 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:56.230202 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-09 01:17:56.230209 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-09 01:17:56.230215 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.230222 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-09 01:17:56.230229 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-09 01:17:56.230235 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.230242 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-09 01:17:56.230249 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-09 01:17:56.230255 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.230262 | orchestrator |
2026-03-09 01:17:56.230269 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-03-09 01:17:56.230275 | orchestrator | Monday 09 March 2026 01:14:22 +0000 (0:00:01.041) 0:06:09.938 **********
2026-03-09 01:17:56.230290 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:56.230307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:56.230315 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-09 01:17:56.230322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:56.230330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:56.230337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:56.230348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-09 01:17:56.230364 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:56.230372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-09 01:17:56.230379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.230407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.230416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.230426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.230443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.230450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-09 01:17:56.230457 | orchestrator |
2026-03-09 01:17:56.230464 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-09 01:17:56.230471 | orchestrator | Monday 09 March 2026 01:14:25 +0000 (0:00:03.079) 0:06:13.018 **********
2026-03-09 01:17:56.230478 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:56.230485 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:56.230492 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:56.230499 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.230505 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.230512 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.230519 | orchestrator |
2026-03-09 01:17:56.230525 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-09 01:17:56.230532 | orchestrator | Monday 09 March 2026 01:14:26 +0000 (0:00:01.043) 0:06:14.061 **********
2026-03-09 01:17:56.230539 | orchestrator |
2026-03-09 01:17:56.230546 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-09 01:17:56.230553 | orchestrator | Monday 09 March 2026 01:14:26 +0000 (0:00:00.167) 0:06:14.229 **********
2026-03-09 01:17:56.230559 | orchestrator |
2026-03-09 01:17:56.230566 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-09 01:17:56.230573 | orchestrator | Monday 09 March 2026 01:14:27 +0000 (0:00:00.219) 0:06:14.449 **********
2026-03-09 01:17:56.230579 | orchestrator |
2026-03-09 01:17:56.230586 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-09 01:17:56.230593 | orchestrator | Monday 09 March 2026 01:14:27 +0000 (0:00:00.174) 0:06:14.623 **********
2026-03-09 01:17:56.230600 | orchestrator |
2026-03-09 01:17:56.230607 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-09 01:17:56.230613 | orchestrator | Monday 09 March 2026 01:14:27 +0000 (0:00:00.162) 0:06:14.785 **********
2026-03-09 01:17:56.230620 | orchestrator |
2026-03-09 01:17:56.230626 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-09 01:17:56.230633 | orchestrator | Monday 09 March 2026 01:14:27 +0000 (0:00:00.389) 0:06:15.175 **********
2026-03-09 01:17:56.230640 | orchestrator |
2026-03-09 01:17:56.230647 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-09 01:17:56.230653 | orchestrator | Monday 09 March 2026 01:14:28 +0000 (0:00:00.212) 0:06:15.388 **********
2026-03-09 01:17:56.230665 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:17:56.230672 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.230679 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:17:56.230686 | orchestrator |
2026-03-09 01:17:56.230693 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-03-09 01:17:56.230700 | orchestrator | Monday 09 March 2026 01:14:40 +0000 (0:00:12.693) 0:06:28.081 **********
2026-03-09 01:17:56.230706 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.230713 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:17:56.230720 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:17:56.230727 | orchestrator |
2026-03-09 01:17:56.230733 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-03-09 01:17:56.230740 | orchestrator | Monday 09 March 2026 01:14:54 +0000 (0:00:14.111) 0:06:42.192 **********
2026-03-09 01:17:56.230747 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:17:56.230754 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:17:56.230760 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:17:56.230767 | orchestrator |
2026-03-09 01:17:56.230774 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-03-09 01:17:56.230781 | orchestrator | Monday 09 March 2026 01:15:21 +0000 (0:00:26.323) 0:07:08.515 **********
2026-03-09 01:17:56.230787 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:17:56.230794 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:17:56.230801 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:17:56.230807 | orchestrator |
2026-03-09 01:17:56.230814 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-03-09 01:17:56.230821 | orchestrator | Monday 09 March 2026 01:16:01 +0000 (0:00:40.579) 0:07:49.095 **********
2026-03-09 01:17:56.230831 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:17:56.230838 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:17:56.230845 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:17:56.230852 | orchestrator |
2026-03-09 01:17:56.230858 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-03-09 01:17:56.230865 | orchestrator | Monday 09 March 2026 01:16:02 +0000 (0:00:00.953) 0:07:50.049 **********
2026-03-09 01:17:56.230872 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:17:56.230879 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:17:56.230886 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:17:56.230892 | orchestrator |
2026-03-09 01:17:56.230906 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-03-09 01:17:56.230913 | orchestrator | Monday 09 March 2026 01:16:03 +0000 (0:00:00.886) 0:07:50.935 **********
2026-03-09 01:17:56.230920 | orchestrator | changed: [testbed-node-3]
2026-03-09 01:17:56.230926 | orchestrator | changed: [testbed-node-4]
2026-03-09 01:17:56.230933 | orchestrator | changed: [testbed-node-5]
2026-03-09 01:17:56.230940 | orchestrator |
2026-03-09 01:17:56.230947 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-03-09 01:17:56.230953 | orchestrator | Monday 09 March 2026 01:16:34 +0000 (0:00:31.001) 0:08:21.937 **********
2026-03-09 01:17:56.230960 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:56.230967 | orchestrator |
2026-03-09 01:17:56.230973 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-03-09 01:17:56.230980 | orchestrator | Monday 09 March 2026 01:16:35 +0000 (0:00:00.428) 0:08:22.365 **********
2026-03-09 01:17:56.230987 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:56.230993 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.231000 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:56.231007 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.231013 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.231020 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-03-09 01:17:56.231027 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-09 01:17:56.231034 | orchestrator |
2026-03-09 01:17:56.231046 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-03-09 01:17:56.231053 | orchestrator | Monday 09 March 2026 01:16:58 +0000 (0:00:23.056) 0:08:45.421 **********
2026-03-09 01:17:56.231060 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.231067 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.231073 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:56.231080 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.231086 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:56.231093 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:56.231100 | orchestrator |
2026-03-09 01:17:56.231107 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-03-09 01:17:56.231113 | orchestrator | Monday 09 March 2026 01:17:10 +0000 (0:00:12.260) 0:08:57.682 **********
2026-03-09 01:17:56.231120 | orchestrator | skipping: [testbed-node-4]
2026-03-09 01:17:56.231126 | orchestrator | skipping: [testbed-node-5]
2026-03-09 01:17:56.231133 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.231140 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.231146 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.231153 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2026-03-09 01:17:56.231160 | orchestrator |
2026-03-09 01:17:56.231166 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-09 01:17:56.231173 | orchestrator | Monday 09 March 2026 01:17:16 +0000 (0:00:05.835) 0:09:03.517 **********
2026-03-09 01:17:56.231180 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-09 01:17:56.231187 | orchestrator |
2026-03-09 01:17:56.231194 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-09 01:17:56.231205 | orchestrator | Monday 09 March 2026 01:17:30 +0000 (0:00:14.662) 0:09:18.179 **********
2026-03-09 01:17:56.231212 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-09 01:17:56.231218 | orchestrator |
2026-03-09 01:17:56.231225 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-03-09 01:17:56.231232 | orchestrator | Monday 09 March 2026 01:17:32 +0000 (0:00:01.773) 0:09:19.952 **********
2026-03-09 01:17:56.231239 | orchestrator | skipping: [testbed-node-3]
2026-03-09 01:17:56.231245 | orchestrator |
2026-03-09 01:17:56.231252 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-03-09 01:17:56.231259 | orchestrator | Monday 09 March 2026 01:17:34 +0000 (0:00:01.652) 0:09:21.605 **********
2026-03-09 01:17:56.231266 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-09 01:17:56.231273 | orchestrator |
2026-03-09 01:17:56.231279 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-03-09 01:17:56.231286 | orchestrator | Monday 09 March 2026 01:17:47 +0000 (0:00:13.033) 0:09:34.638 **********
2026-03-09 01:17:56.231293 | orchestrator | ok: [testbed-node-3]
2026-03-09 01:17:56.231300 | orchestrator | ok: [testbed-node-4]
2026-03-09 01:17:56.231306 | orchestrator | ok: [testbed-node-5]
2026-03-09 01:17:56.231313 | orchestrator | ok: [testbed-node-0]
2026-03-09 01:17:56.231320 | orchestrator | ok: [testbed-node-1]
2026-03-09 01:17:56.231326 | orchestrator | ok: [testbed-node-2]
2026-03-09 01:17:56.231333 | orchestrator |
2026-03-09 01:17:56.231340 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-03-09 01:17:56.231346 | orchestrator |
2026-03-09 01:17:56.231353 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-09 01:17:56.231360 | orchestrator | Monday 09 March 2026 01:17:49 +0000 (0:00:02.107) 0:09:36.745 **********
2026-03-09 01:17:56.231367 | orchestrator | changed: [testbed-node-0]
2026-03-09 01:17:56.231373 | orchestrator | changed: [testbed-node-1]
2026-03-09 01:17:56.231380 | orchestrator | changed: [testbed-node-2]
2026-03-09 01:17:56.231439 | orchestrator |
2026-03-09 01:17:56.231446 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-09 01:17:56.231453 | orchestrator |
2026-03-09 01:17:56.231459 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-09 01:17:56.231472 | orchestrator | Monday 09 March 2026 01:17:50 +0000 (0:00:01.476) 0:09:38.222 **********
2026-03-09 01:17:56.231484 | orchestrator | skipping: [testbed-node-0]
2026-03-09 01:17:56.231491 | orchestrator | skipping: [testbed-node-1]
2026-03-09 01:17:56.231498 | orchestrator | skipping: [testbed-node-2]
2026-03-09 01:17:56.231505 | orchestrator |
2026-03-09 01:17:56.231512 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-09 01:17:56.231518 | orchestrator |
2026-03-09 01:17:56.231525 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-09 01:17:56.231532 | orchestrator | Monday 09 March 2026 01:17:51 +0000 (0:00:00.586) 0:09:38.808 **********
2026-03-09 01:17:56.231543 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-09 01:17:56.231550 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-09 01:17:56.231557 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-09 01:17:56.231564 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-09 01:17:56.231570 | orchestrator |
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-09 01:17:56.231577 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-09 01:17:56.231584 | orchestrator | skipping: [testbed-node-3] 2026-03-09 01:17:56.231591 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-09 01:17:56.231598 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-09 01:17:56.231604 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-09 01:17:56.231611 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-09 01:17:56.231618 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-09 01:17:56.231624 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-09 01:17:56.231631 | orchestrator | skipping: [testbed-node-4] 2026-03-09 01:17:56.231638 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-09 01:17:56.231645 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-09 01:17:56.231651 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-09 01:17:56.231658 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-09 01:17:56.231665 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-09 01:17:56.231671 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-09 01:17:56.231678 | orchestrator | skipping: [testbed-node-5] 2026-03-09 01:17:56.231685 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-09 01:17:56.231691 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-09 01:17:56.231698 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-09 01:17:56.231705 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-09 01:17:56.231711 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-09 01:17:56.231718 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-09 01:17:56.231725 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.231731 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-09 01:17:56.231738 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-09 01:17:56.231745 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-09 01:17:56.231751 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-09 01:17:56.231758 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-09 01:17:56.231764 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-09 01:17:56.231771 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.231778 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-09 01:17:56.231790 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-09 01:17:56.231813 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-09 01:17:56.231827 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-09 01:17:56.231839 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-09 01:17:56.231849 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-09 01:17:56.231859 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.231870 | orchestrator | 2026-03-09 01:17:56.231879 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-09 01:17:56.231892 | orchestrator | 2026-03-09 01:17:56.231903 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-09 01:17:56.231914 | orchestrator | Monday 09 March 2026 01:17:53 +0000 (0:00:01.642) 
0:09:40.450 ********** 2026-03-09 01:17:56.231926 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-09 01:17:56.231937 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-09 01:17:56.231948 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.231959 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-09 01:17:56.231969 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-09 01:17:56.231976 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.231982 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-09 01:17:56.231988 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-09 01:17:56.231994 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:17:56.232001 | orchestrator | 2026-03-09 01:17:56.232007 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-09 01:17:56.232013 | orchestrator | 2026-03-09 01:17:56.232019 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-09 01:17:56.232025 | orchestrator | Monday 09 March 2026 01:17:54 +0000 (0:00:00.850) 0:09:41.301 ********** 2026-03-09 01:17:56.232032 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.232060 | orchestrator | 2026-03-09 01:17:56.232074 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-09 01:17:56.232081 | orchestrator | 2026-03-09 01:17:56.232092 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-09 01:17:56.232099 | orchestrator | Monday 09 March 2026 01:17:54 +0000 (0:00:00.749) 0:09:42.051 ********** 2026-03-09 01:17:56.232105 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:17:56.232111 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:17:56.232117 | orchestrator | skipping: [testbed-node-2] 
2026-03-09 01:17:56.232124 | orchestrator | 2026-03-09 01:17:56.232130 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:17:56.232141 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-09 01:17:56.232149 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2026-03-09 01:17:56.232156 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-09 01:17:56.232162 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-09 01:17:56.232168 | orchestrator | testbed-node-3 : ok=45  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-09 01:17:56.232175 | orchestrator | testbed-node-4 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-09 01:17:56.232181 | orchestrator | testbed-node-5 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-09 01:17:56.232193 | orchestrator | 2026-03-09 01:17:56.232199 | orchestrator | 2026-03-09 01:17:56.232205 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:17:56.232212 | orchestrator | Monday 09 March 2026 01:17:55 +0000 (0:00:00.791) 0:09:42.842 ********** 2026-03-09 01:17:56.232218 | orchestrator | =============================================================================== 2026-03-09 01:17:56.232224 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 40.58s 2026-03-09 01:17:56.232230 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 36.23s 2026-03-09 01:17:56.232236 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 31.00s 2026-03-09 01:17:56.232242 | orchestrator | nova-cell : 
Restart nova-ssh container --------------------------------- 26.32s 2026-03-09 01:17:56.232249 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 24.28s 2026-03-09 01:17:56.232255 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.06s 2026-03-09 01:17:56.232261 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.03s 2026-03-09 01:17:56.232267 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.66s 2026-03-09 01:17:56.232273 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 17.48s 2026-03-09 01:17:56.232279 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.83s 2026-03-09 01:17:56.232285 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.73s 2026-03-09 01:17:56.232291 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.66s 2026-03-09 01:17:56.232297 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.11s 2026-03-09 01:17:56.232304 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.03s 2026-03-09 01:17:56.232310 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.86s 2026-03-09 01:17:56.232316 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.69s 2026-03-09 01:17:56.232322 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 12.26s 2026-03-09 01:17:56.232328 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.95s 2026-03-09 01:17:56.232334 | orchestrator | nova : Copying over nova.conf for nova-api-bootstrap -------------------- 8.69s 2026-03-09 01:17:56.232340 | orchestrator | service-ks-register : nova 
| Granting user roles ------------------------ 8.19s 2026-03-09 01:17:56.232355 | orchestrator | 2026-03-09 01:17:56 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:56.232362 | orchestrator | 2026-03-09 01:17:56 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:17:59.268772 | orchestrator | 2026-03-09 01:17:59 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:17:59.268872 | orchestrator | 2026-03-09 01:17:59 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:18:02.314082 | orchestrator | 2026-03-09 01:18:02 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:18:02.314155 | orchestrator | 2026-03-09 01:18:02 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:18:05.353932 | orchestrator | 2026-03-09 01:18:05 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:18:05.354083 | orchestrator | 2026-03-09 01:18:05 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:18:08.405946 | orchestrator | 2026-03-09 01:18:08 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:18:08.406071 | orchestrator | 2026-03-09 01:18:08 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:18:11.451662 | orchestrator | 2026-03-09 01:18:11 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state STARTED 2026-03-09 01:18:11.451758 | orchestrator | 2026-03-09 01:18:11 | INFO  | Wait 1 second(s) until the next check 2026-03-09 01:18:14.493867 | orchestrator | 2026-03-09 01:18:14 | INFO  | Task 27ec5f28-6c13-4ecb-a8fa-08c3ccdfcf06 is in state SUCCESS 2026-03-09 01:18:14.496797 | orchestrator | 2026-03-09 01:18:14.496871 | orchestrator | 2026-03-09 01:18:14.496898 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-09 01:18:14.496919 | orchestrator | 2026-03-09 01:18:14.496938 | orchestrator | TASK [Group hosts 
based on Kolla action] *************************************** 2026-03-09 01:18:14.496958 | orchestrator | Monday 09 March 2026 01:12:57 +0000 (0:00:00.331) 0:00:00.331 ********** 2026-03-09 01:18:14.496976 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:18:14.496993 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:18:14.497009 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:18:14.497026 | orchestrator | 2026-03-09 01:18:14.497044 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-09 01:18:14.497062 | orchestrator | Monday 09 March 2026 01:12:57 +0000 (0:00:00.374) 0:00:00.706 ********** 2026-03-09 01:18:14.497078 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-09 01:18:14.497095 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-09 01:18:14.497112 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-09 01:18:14.497129 | orchestrator | 2026-03-09 01:18:14.497145 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-09 01:18:14.497161 | orchestrator | 2026-03-09 01:18:14.497178 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:18:14.497195 | orchestrator | Monday 09 March 2026 01:12:58 +0000 (0:00:00.621) 0:00:01.328 ********** 2026-03-09 01:18:14.497210 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:18:14.497226 | orchestrator | 2026-03-09 01:18:14.497243 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-09 01:18:14.497260 | orchestrator | Monday 09 March 2026 01:12:59 +0000 (0:00:00.627) 0:00:01.956 ********** 2026-03-09 01:18:14.497277 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-09 01:18:14.497294 | orchestrator | 2026-03-09 
01:18:14.497310 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-09 01:18:14.497327 | orchestrator | Monday 09 March 2026 01:13:03 +0000 (0:00:04.126) 0:00:06.083 ********** 2026-03-09 01:18:14.497345 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-09 01:18:14.497364 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-09 01:18:14.497403 | orchestrator | 2026-03-09 01:18:14.497422 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-09 01:18:14.497441 | orchestrator | Monday 09 March 2026 01:13:10 +0000 (0:00:07.570) 0:00:13.653 ********** 2026-03-09 01:18:14.497459 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-09 01:18:14.497472 | orchestrator | 2026-03-09 01:18:14.497483 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-09 01:18:14.497496 | orchestrator | Monday 09 March 2026 01:13:14 +0000 (0:00:03.867) 0:00:17.520 ********** 2026-03-09 01:18:14.497513 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-09 01:18:14.497539 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-09 01:18:14.497557 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-09 01:18:14.497574 | orchestrator | 2026-03-09 01:18:14.497590 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-09 01:18:14.497606 | orchestrator | Monday 09 March 2026 01:13:23 +0000 (0:00:09.177) 0:00:26.698 ********** 2026-03-09 01:18:14.497621 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-09 01:18:14.497639 | orchestrator | 2026-03-09 01:18:14.497655 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 
2026-03-09 01:18:14.497697 | orchestrator | Monday 09 March 2026 01:13:27 +0000 (0:00:03.961) 0:00:30.659 ********** 2026-03-09 01:18:14.497715 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-09 01:18:14.497725 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-09 01:18:14.497735 | orchestrator | 2026-03-09 01:18:14.497745 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-09 01:18:14.497754 | orchestrator | Monday 09 March 2026 01:13:36 +0000 (0:00:08.611) 0:00:39.271 ********** 2026-03-09 01:18:14.497764 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-09 01:18:14.497774 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-09 01:18:14.497783 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-09 01:18:14.497793 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-09 01:18:14.497803 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-09 01:18:14.497812 | orchestrator | 2026-03-09 01:18:14.497822 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:18:14.497832 | orchestrator | Monday 09 March 2026 01:13:54 +0000 (0:00:17.878) 0:00:57.149 ********** 2026-03-09 01:18:14.497841 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:18:14.497851 | orchestrator | 2026-03-09 01:18:14.497861 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-09 01:18:14.497871 | orchestrator | Monday 09 March 2026 01:13:55 +0000 (0:00:00.647) 0:00:57.796 ********** 2026-03-09 01:18:14.497880 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.497890 | orchestrator | 2026-03-09 01:18:14.497900 | 
orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-09 01:18:14.497922 | orchestrator | Monday 09 March 2026 01:14:01 +0000 (0:00:06.264) 0:01:04.060 ********** 2026-03-09 01:18:14.497932 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.497942 | orchestrator | 2026-03-09 01:18:14.497952 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-09 01:18:14.498066 | orchestrator | Monday 09 March 2026 01:14:06 +0000 (0:00:04.967) 0:01:09.028 ********** 2026-03-09 01:18:14.498083 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:18:14.498093 | orchestrator | 2026-03-09 01:18:14.498103 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-09 01:18:14.498113 | orchestrator | Monday 09 March 2026 01:14:09 +0000 (0:00:03.673) 0:01:12.702 ********** 2026-03-09 01:18:14.498122 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-09 01:18:14.498132 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-09 01:18:14.498142 | orchestrator | 2026-03-09 01:18:14.498151 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-09 01:18:14.498161 | orchestrator | Monday 09 March 2026 01:14:21 +0000 (0:00:11.339) 0:01:24.041 ********** 2026-03-09 01:18:14.498171 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-09 01:18:14.498181 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-09 01:18:14.498192 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-09 01:18:14.498203 | orchestrator | changed: [testbed-node-0] => 
(item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-09 01:18:14.498213 | orchestrator | 2026-03-09 01:18:14.498223 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-09 01:18:14.498233 | orchestrator | Monday 09 March 2026 01:14:37 +0000 (0:00:16.642) 0:01:40.684 ********** 2026-03-09 01:18:14.498251 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.498261 | orchestrator | 2026-03-09 01:18:14.498270 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-09 01:18:14.498280 | orchestrator | Monday 09 March 2026 01:14:42 +0000 (0:00:04.892) 0:01:45.576 ********** 2026-03-09 01:18:14.498290 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.498300 | orchestrator | 2026-03-09 01:18:14.498309 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-09 01:18:14.498319 | orchestrator | Monday 09 March 2026 01:14:48 +0000 (0:00:05.686) 0:01:51.263 ********** 2026-03-09 01:18:14.498329 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:18:14.498339 | orchestrator | 2026-03-09 01:18:14.498349 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-09 01:18:14.498359 | orchestrator | Monday 09 March 2026 01:14:48 +0000 (0:00:00.266) 0:01:51.529 ********** 2026-03-09 01:18:14.498369 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:18:14.498378 | orchestrator | 2026-03-09 01:18:14.498451 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:18:14.498462 | orchestrator | Monday 09 March 2026 01:14:52 +0000 (0:00:04.203) 0:01:55.733 ********** 2026-03-09 01:18:14.498472 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 
01:18:14.498482 | orchestrator | 2026-03-09 01:18:14.498492 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-09 01:18:14.498502 | orchestrator | Monday 09 March 2026 01:14:54 +0000 (0:00:01.322) 0:01:57.056 ********** 2026-03-09 01:18:14.498511 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.498521 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.498531 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.498541 | orchestrator | 2026-03-09 01:18:14.498551 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-09 01:18:14.498560 | orchestrator | Monday 09 March 2026 01:15:00 +0000 (0:00:06.137) 0:02:03.193 ********** 2026-03-09 01:18:14.498570 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.498580 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.498589 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.498599 | orchestrator | 2026-03-09 01:18:14.498609 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-09 01:18:14.498619 | orchestrator | Monday 09 March 2026 01:15:05 +0000 (0:00:05.005) 0:02:08.199 ********** 2026-03-09 01:18:14.498629 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.498638 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.498648 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.498657 | orchestrator | 2026-03-09 01:18:14.498667 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-09 01:18:14.498677 | orchestrator | Monday 09 March 2026 01:15:07 +0000 (0:00:01.746) 0:02:09.945 ********** 2026-03-09 01:18:14.498687 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:18:14.498697 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:18:14.498707 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:18:14.498716 | orchestrator 
| 2026-03-09 01:18:14.498726 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-09 01:18:14.498736 | orchestrator | Monday 09 March 2026 01:15:09 +0000 (0:00:02.107) 0:02:12.053 ********** 2026-03-09 01:18:14.498746 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.498756 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.498765 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.498775 | orchestrator | 2026-03-09 01:18:14.498784 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-09 01:18:14.498794 | orchestrator | Monday 09 March 2026 01:15:10 +0000 (0:00:01.454) 0:02:13.508 ********** 2026-03-09 01:18:14.498804 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.498814 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.498823 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.498840 | orchestrator | 2026-03-09 01:18:14.498850 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-09 01:18:14.498866 | orchestrator | Monday 09 March 2026 01:15:11 +0000 (0:00:01.220) 0:02:14.728 ********** 2026-03-09 01:18:14.498876 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.498886 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.498896 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.498905 | orchestrator | 2026-03-09 01:18:14.499066 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-09 01:18:14.499096 | orchestrator | Monday 09 March 2026 01:15:14 +0000 (0:00:02.447) 0:02:17.176 ********** 2026-03-09 01:18:14.499113 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.499129 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.499145 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.499162 | orchestrator | 2026-03-09 
01:18:14.499179 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-09 01:18:14.499196 | orchestrator | Monday 09 March 2026 01:15:16 +0000 (0:00:01.728) 0:02:18.904 ********** 2026-03-09 01:18:14.499213 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:18:14.499230 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:18:14.499246 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:18:14.499263 | orchestrator | 2026-03-09 01:18:14.499279 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-09 01:18:14.499295 | orchestrator | Monday 09 March 2026 01:15:16 +0000 (0:00:00.668) 0:02:19.572 ********** 2026-03-09 01:18:14.499311 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:18:14.499327 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:18:14.499342 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:18:14.499358 | orchestrator | 2026-03-09 01:18:14.499375 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:18:14.499413 | orchestrator | Monday 09 March 2026 01:15:19 +0000 (0:00:02.901) 0:02:22.474 ********** 2026-03-09 01:18:14.499431 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:18:14.499447 | orchestrator | 2026-03-09 01:18:14.499463 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-09 01:18:14.499480 | orchestrator | Monday 09 March 2026 01:15:20 +0000 (0:00:00.821) 0:02:23.296 ********** 2026-03-09 01:18:14.499498 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:18:14.499514 | orchestrator | 2026-03-09 01:18:14.499530 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-09 01:18:14.499547 | orchestrator | Monday 09 March 2026 01:15:25 +0000 (0:00:04.702) 0:02:27.998 ********** 2026-03-09 
01:18:14.499563 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:18:14.499580 | orchestrator | 2026-03-09 01:18:14.499596 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-09 01:18:14.499613 | orchestrator | Monday 09 March 2026 01:15:28 +0000 (0:00:03.614) 0:02:31.613 ********** 2026-03-09 01:18:14.499629 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-09 01:18:14.499646 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-09 01:18:14.499662 | orchestrator | 2026-03-09 01:18:14.499679 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-09 01:18:14.499695 | orchestrator | Monday 09 March 2026 01:15:36 +0000 (0:00:07.592) 0:02:39.206 ********** 2026-03-09 01:18:14.499711 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:18:14.499726 | orchestrator | 2026-03-09 01:18:14.499744 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-09 01:18:14.499760 | orchestrator | Monday 09 March 2026 01:15:40 +0000 (0:00:04.331) 0:02:43.537 ********** 2026-03-09 01:18:14.499776 | orchestrator | ok: [testbed-node-0] 2026-03-09 01:18:14.499792 | orchestrator | ok: [testbed-node-1] 2026-03-09 01:18:14.499808 | orchestrator | ok: [testbed-node-2] 2026-03-09 01:18:14.499825 | orchestrator | 2026-03-09 01:18:14.499841 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-09 01:18:14.499872 | orchestrator | Monday 09 March 2026 01:15:41 +0000 (0:00:00.366) 0:02:43.903 ********** 2026-03-09 01:18:14.499893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.499979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.500003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.500023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.500041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.500058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.500088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.500107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.500180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.500280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.500301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.500319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.500348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.500366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.500406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.500425 | orchestrator | 2026-03-09 01:18:14.500449 | 
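orchestrator | # Note: the container definitions logged above attach kolla-style healthchecks (`healthcheck_port <service> <port>` and `healthcheck_curl <url>`). As a rough illustration of what such a port-based probe amounts to, here is a minimal sketch in Python. This is a hypothetical helper, not kolla's actual `healthcheck_port` script (which additionally verifies that the named process owns a socket to the port); host and port values are stand-ins.

```python
import socket


def check_tcp_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Simplified analogue of the healthcheck probes seen in the deploy log,
    e.g. 'healthcheck_port octavia-worker 5672' (a check against the
    message-queue port). Assumption: a plain connect() test; the real kolla
    helper inspects the container's own sockets instead.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or unreachable -> unhealthy.
        return False
```

A failing probe with `retries: 3` and `interval: 30`, as configured above, would mark the container unhealthy after three consecutive failed checks.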
orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-09 01:18:14.500467 | orchestrator | Monday 09 March 2026 01:15:43 +0000 (0:00:02.545) 0:02:46.448 ********** 2026-03-09 01:18:14.500484 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:18:14.500500 | orchestrator | 2026-03-09 01:18:14.500570 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-09 01:18:14.500590 | orchestrator | Monday 09 March 2026 01:15:43 +0000 (0:00:00.162) 0:02:46.611 ********** 2026-03-09 01:18:14.500608 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:18:14.500626 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:18:14.500642 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:18:14.500659 | orchestrator | 2026-03-09 01:18:14.500676 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-09 01:18:14.500693 | orchestrator | Monday 09 March 2026 01:15:44 +0000 (0:00:00.610) 0:02:47.221 ********** 2026-03-09 01:18:14.500711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:18:14.500730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:18:14.500760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.500779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.500797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:18:14.500815 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:18:14.500884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:18:14.500908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:18:14.500926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.500963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.500980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 
5672'], 'timeout': '30'}}})  2026-03-09 01:18:14.500998 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:18:14.501015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:18:14.501090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:18:14.501112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.501128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.501154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:18:14.501170 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:18:14.501187 | orchestrator | 2026-03-09 01:18:14.501203 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 01:18:14.501221 | orchestrator | Monday 09 March 2026 01:15:45 +0000 (0:00:00.831) 0:02:48.053 ********** 2026-03-09 01:18:14.501238 | orchestrator | included: 
/ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-09 01:18:14.501255 | orchestrator | 2026-03-09 01:18:14.501272 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-09 01:18:14.501289 | orchestrator | Monday 09 March 2026 01:15:45 +0000 (0:00:00.645) 0:02:48.698 ********** 2026-03-09 01:18:14.501307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.501379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.501436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.501465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.501482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.501499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.501516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.501535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.501561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.501621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.501642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.501659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.501677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.501695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.501731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.501751 | orchestrator | 2026-03-09 01:18:14.501767 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-09 01:18:14.501784 | orchestrator | Monday 09 March 2026 01:15:51 +0000 (0:00:05.939) 0:02:54.638 ********** 2026-03-09 01:18:14.501811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:18:14.501829 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:18:14.501847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.501865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.501882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:18:14.501900 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:18:14.501934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:18:14.501961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 
01:18:14.501979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.501996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.502067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:18:14.502090 | orchestrator | skipping: [testbed-node-1] 2026-03-09 
01:18:14.502109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:18:14.502135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:18:14.502176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.502197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.502214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:18:14.502232 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:18:14.502251 | orchestrator | 2026-03-09 01:18:14.502270 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-09 01:18:14.502281 | orchestrator | Monday 09 March 2026 01:15:52 +0000 (0:00:00.921) 0:02:55.559 ********** 2026-03-09 01:18:14.502291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:18:14.502301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:18:14.502316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.502339 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.502349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:18:14.502359 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:18:14.502369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:18:14.502379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:18:14.502440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.502455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.502484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:18:14.502495 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:18:14.502505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-09 01:18:14.502516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-09 01:18:14.502526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.502536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-09 01:18:14.502546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-09 01:18:14.502563 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:18:14.502579 | orchestrator | 2026-03-09 01:18:14.502593 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-09 01:18:14.502603 | orchestrator | Monday 09 March 2026 01:15:53 +0000 (0:00:01.220) 0:02:56.780 ********** 2026-03-09 01:18:14.502624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.502636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.502644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.502653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.502666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.502677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.502692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502780 | orchestrator | 2026-03-09 01:18:14.502789 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-09 01:18:14.502797 | orchestrator | Monday 09 March 2026 01:15:58 +0000 (0:00:04.756) 0:03:01.537 ********** 2026-03-09 01:18:14.502805 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-09 01:18:14.502814 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-09 01:18:14.502822 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-09 01:18:14.502830 | orchestrator | 2026-03-09 01:18:14.502838 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-09 01:18:14.502846 | orchestrator | Monday 09 March 2026 01:16:00 +0000 (0:00:02.081) 0:03:03.618 ********** 2026-03-09 01:18:14.502854 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.502868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.502885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.502894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.502902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.502911 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.502919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.502995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.503009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.503018 | orchestrator | 2026-03-09 01:18:14.503027 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-09 01:18:14.503035 | orchestrator | Monday 09 March 2026 01:16:23 +0000 (0:00:22.461) 0:03:26.080 ********** 2026-03-09 01:18:14.503043 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.503051 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.503059 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.503066 | orchestrator | 2026-03-09 01:18:14.503074 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-09 01:18:14.503085 | orchestrator | Monday 09 March 2026 01:16:24 +0000 (0:00:01.616) 0:03:27.697 ********** 2026-03-09 01:18:14.503094 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-09 01:18:14.503102 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-09 01:18:14.503114 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-09 01:18:14.503123 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-09 01:18:14.503131 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-09 01:18:14.503139 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-09 01:18:14.503147 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-09 01:18:14.503154 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-09 01:18:14.503162 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-09 01:18:14.503170 | 
orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-09 01:18:14.503178 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-09 01:18:14.503186 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-09 01:18:14.503194 | orchestrator | 2026-03-09 01:18:14.503202 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-09 01:18:14.503210 | orchestrator | Monday 09 March 2026 01:16:30 +0000 (0:00:06.030) 0:03:33.727 ********** 2026-03-09 01:18:14.503218 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-09 01:18:14.503226 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-09 01:18:14.503233 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-09 01:18:14.503241 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-09 01:18:14.503249 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-09 01:18:14.503262 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-09 01:18:14.503270 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-09 01:18:14.503277 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-09 01:18:14.503285 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-09 01:18:14.503293 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-09 01:18:14.503301 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-09 01:18:14.503309 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-09 01:18:14.503317 | orchestrator | 2026-03-09 01:18:14.503325 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-09 01:18:14.503333 | orchestrator | Monday 09 March 2026 01:16:37 +0000 
(0:00:06.694) 0:03:40.422 ********** 2026-03-09 01:18:14.503340 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-09 01:18:14.503348 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-09 01:18:14.503359 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-09 01:18:14.503371 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-09 01:18:14.503379 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-09 01:18:14.503399 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-09 01:18:14.503408 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-09 01:18:14.503415 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-09 01:18:14.503423 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-09 01:18:14.503431 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-09 01:18:14.503439 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-09 01:18:14.503447 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-09 01:18:14.503455 | orchestrator | 2026-03-09 01:18:14.503462 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-09 01:18:14.503470 | orchestrator | Monday 09 March 2026 01:16:44 +0000 (0:00:07.095) 0:03:47.517 ********** 2026-03-09 01:18:14.503478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.503495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.503514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-09 01:18:14.503523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.503532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.503540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-09 01:18:14.503550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.503571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.503580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.503592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.503601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 01:18:14.503609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-09 
01:18:14.503618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.503626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.503642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-09 01:18:14.503656 | orchestrator | 2026-03-09 01:18:14.503664 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-09 
01:18:14.503672 | orchestrator | Monday 09 March 2026 01:16:48 +0000 (0:00:03.816) 0:03:51.334 ********** 2026-03-09 01:18:14.503680 | orchestrator | skipping: [testbed-node-0] 2026-03-09 01:18:14.503689 | orchestrator | skipping: [testbed-node-1] 2026-03-09 01:18:14.503697 | orchestrator | skipping: [testbed-node-2] 2026-03-09 01:18:14.503704 | orchestrator | 2026-03-09 01:18:14.503712 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-09 01:18:14.503720 | orchestrator | Monday 09 March 2026 01:16:48 +0000 (0:00:00.336) 0:03:51.671 ********** 2026-03-09 01:18:14.503728 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.503736 | orchestrator | 2026-03-09 01:18:14.503744 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-09 01:18:14.503752 | orchestrator | Monday 09 March 2026 01:16:51 +0000 (0:00:02.216) 0:03:53.888 ********** 2026-03-09 01:18:14.503760 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.503768 | orchestrator | 2026-03-09 01:18:14.503776 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-09 01:18:14.503784 | orchestrator | Monday 09 March 2026 01:16:53 +0000 (0:00:02.373) 0:03:56.261 ********** 2026-03-09 01:18:14.503792 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.503800 | orchestrator | 2026-03-09 01:18:14.503808 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-09 01:18:14.503816 | orchestrator | Monday 09 March 2026 01:16:56 +0000 (0:00:02.546) 0:03:58.808 ********** 2026-03-09 01:18:14.503824 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.503832 | orchestrator | 2026-03-09 01:18:14.503840 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-09 01:18:14.503849 | orchestrator | Monday 09 March 2026 01:16:59 +0000 
(0:00:03.258) 0:04:02.067 ********** 2026-03-09 01:18:14.503862 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.503870 | orchestrator | 2026-03-09 01:18:14.503878 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-09 01:18:14.503886 | orchestrator | Monday 09 March 2026 01:17:24 +0000 (0:00:25.493) 0:04:27.561 ********** 2026-03-09 01:18:14.503894 | orchestrator | 2026-03-09 01:18:14.503902 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-09 01:18:14.503910 | orchestrator | Monday 09 March 2026 01:17:24 +0000 (0:00:00.083) 0:04:27.644 ********** 2026-03-09 01:18:14.503918 | orchestrator | 2026-03-09 01:18:14.503926 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-09 01:18:14.503934 | orchestrator | Monday 09 March 2026 01:17:24 +0000 (0:00:00.079) 0:04:27.724 ********** 2026-03-09 01:18:14.503942 | orchestrator | 2026-03-09 01:18:14.503950 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-09 01:18:14.503958 | orchestrator | Monday 09 March 2026 01:17:25 +0000 (0:00:00.083) 0:04:27.807 ********** 2026-03-09 01:18:14.503965 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.503973 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.503981 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.503989 | orchestrator | 2026-03-09 01:18:14.503997 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-09 01:18:14.504005 | orchestrator | Monday 09 March 2026 01:17:41 +0000 (0:00:16.745) 0:04:44.553 ********** 2026-03-09 01:18:14.504013 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.504021 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.504029 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.504037 | orchestrator 
| 2026-03-09 01:18:14.504045 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-09 01:18:14.504059 | orchestrator | Monday 09 March 2026 01:17:49 +0000 (0:00:07.363) 0:04:51.916 ********** 2026-03-09 01:18:14.504067 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.504075 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.504083 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.504090 | orchestrator | 2026-03-09 01:18:14.504098 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-09 01:18:14.504106 | orchestrator | Monday 09 March 2026 01:17:55 +0000 (0:00:06.422) 0:04:58.339 ********** 2026-03-09 01:18:14.504114 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.504122 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.504130 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.504138 | orchestrator | 2026-03-09 01:18:14.504146 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-09 01:18:14.504154 | orchestrator | Monday 09 March 2026 01:18:01 +0000 (0:00:05.757) 0:05:04.097 ********** 2026-03-09 01:18:14.504162 | orchestrator | changed: [testbed-node-0] 2026-03-09 01:18:14.504170 | orchestrator | changed: [testbed-node-2] 2026-03-09 01:18:14.504177 | orchestrator | changed: [testbed-node-1] 2026-03-09 01:18:14.504185 | orchestrator | 2026-03-09 01:18:14.504193 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-09 01:18:14.504202 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-09 01:18:14.504210 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-09 01:18:14.504218 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-09 01:18:14.504230 | orchestrator | 2026-03-09 01:18:14.504238 | orchestrator | 2026-03-09 01:18:14.504246 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-09 01:18:14.504254 | orchestrator | Monday 09 March 2026 01:18:12 +0000 (0:00:10.925) 0:05:15.023 ********** 2026-03-09 01:18:14.504272 | orchestrator | =============================================================================== 2026-03-09 01:18:14.504281 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 25.49s 2026-03-09 01:18:14.504289 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 22.46s 2026-03-09 01:18:14.504297 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.88s 2026-03-09 01:18:14.504305 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.75s 2026-03-09 01:18:14.504313 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.64s 2026-03-09 01:18:14.504321 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.34s 2026-03-09 01:18:14.504329 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.93s 2026-03-09 01:18:14.504337 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.18s 2026-03-09 01:18:14.504345 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.61s 2026-03-09 01:18:14.504353 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.59s 2026-03-09 01:18:14.504361 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.57s 2026-03-09 01:18:14.504369 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.36s 2026-03-09 01:18:14.504376 | 
orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 7.10s 2026-03-09 01:18:14.504419 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.69s 2026-03-09 01:18:14.504428 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 6.42s 2026-03-09 01:18:14.504436 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 6.26s 2026-03-09 01:18:14.504444 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.14s 2026-03-09 01:18:14.504458 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 6.03s 2026-03-09 01:18:14.504466 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.94s 2026-03-09 01:18:14.504474 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.76s 2026-03-09 01:18:14.504482 | orchestrator | 2026-03-09 01:18:14 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-09 01:19:15.316384 | orchestrator | 2026-03-09 01:19:15.977341 | orchestrator | 2026-03-09 01:19:15.981983 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Mar 9 01:19:15 UTC 2026 2026-03-09 01:19:15.982107 | orchestrator | 2026-03-09 01:19:16.299086 | orchestrator | ok: Runtime: 0:38:45.691467 2026-03-09 01:19:16.563336 | 2026-03-09 01:19:16.563503 | TASK [Bootstrap services] 2026-03-09 01:19:17.313738 | orchestrator | 2026-03-09 01:19:17.313904 | orchestrator | # BOOTSTRAP 2026-03-09 01:19:17.313918 | orchestrator | 2026-03-09 01:19:17.313928 | orchestrator | + set
-e
2026-03-09 01:19:17.313937 | orchestrator | + echo
2026-03-09 01:19:17.313946 | orchestrator | + echo '# BOOTSTRAP'
2026-03-09 01:19:17.313958 | orchestrator | + echo
2026-03-09 01:19:17.314094 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-03-09 01:19:17.321785 | orchestrator | + set -e
2026-03-09 01:19:17.321862 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-03-09 01:19:23.293301 | orchestrator | 2026-03-09 01:19:23 | INFO  | It takes a moment until task edd6b870-2795-4cd7-aaf7-e42ae32162f6 (flavor-manager) has been started and output is visible here.
2026-03-09 01:19:32.004683 | orchestrator | 2026-03-09 01:19:26 | INFO  | Flavor SCS-1L-1 created
2026-03-09 01:19:32.004892 | orchestrator | 2026-03-09 01:19:27 | INFO  | Flavor SCS-1L-1-5 created
2026-03-09 01:19:32.004926 | orchestrator | 2026-03-09 01:19:27 | INFO  | Flavor SCS-1V-2 created
2026-03-09 01:19:32.004947 | orchestrator | 2026-03-09 01:19:27 | INFO  | Flavor SCS-1V-2-5 created
2026-03-09 01:19:32.004964 | orchestrator | 2026-03-09 01:19:27 | INFO  | Flavor SCS-1V-4 created
2026-03-09 01:19:32.004980 | orchestrator | 2026-03-09 01:19:28 | INFO  | Flavor SCS-1V-4-10 created
2026-03-09 01:19:32.005000 | orchestrator | 2026-03-09 01:19:28 | INFO  | Flavor SCS-1V-8 created
2026-03-09 01:19:32.005021 | orchestrator | 2026-03-09 01:19:28 | INFO  | Flavor SCS-1V-8-20 created
2026-03-09 01:19:32.005066 | orchestrator | 2026-03-09 01:19:28 | INFO  | Flavor SCS-2V-4 created
2026-03-09 01:19:32.005084 | orchestrator | 2026-03-09 01:19:28 | INFO  | Flavor SCS-2V-4-10 created
2026-03-09 01:19:32.005103 | orchestrator | 2026-03-09 01:19:28 | INFO  | Flavor SCS-2V-8 created
2026-03-09 01:19:32.005123 | orchestrator | 2026-03-09 01:19:28 | INFO  | Flavor SCS-2V-8-20 created
2026-03-09 01:19:32.005142 | orchestrator | 2026-03-09 01:19:29 | INFO  | Flavor SCS-2V-16 created
2026-03-09 01:19:32.005162 | orchestrator | 2026-03-09 01:19:29 | INFO  | Flavor SCS-2V-16-50 created
2026-03-09 01:19:32.005178 | orchestrator | 2026-03-09 01:19:29 | INFO  | Flavor SCS-4V-8 created
2026-03-09 01:19:32.005195 | orchestrator | 2026-03-09 01:19:29 | INFO  | Flavor SCS-4V-8-20 created
2026-03-09 01:19:32.005213 | orchestrator | 2026-03-09 01:19:29 | INFO  | Flavor SCS-4V-16 created
2026-03-09 01:19:32.005232 | orchestrator | 2026-03-09 01:19:29 | INFO  | Flavor SCS-4V-16-50 created
2026-03-09 01:19:32.005251 | orchestrator | 2026-03-09 01:19:30 | INFO  | Flavor SCS-4V-32 created
2026-03-09 01:19:32.005270 | orchestrator | 2026-03-09 01:19:30 | INFO  | Flavor SCS-4V-32-100 created
2026-03-09 01:19:32.005287 | orchestrator | 2026-03-09 01:19:30 | INFO  | Flavor SCS-8V-16 created
2026-03-09 01:19:32.005306 | orchestrator | 2026-03-09 01:19:30 | INFO  | Flavor SCS-8V-16-50 created
2026-03-09 01:19:32.005326 | orchestrator | 2026-03-09 01:19:30 | INFO  | Flavor SCS-8V-32 created
2026-03-09 01:19:32.005345 | orchestrator | 2026-03-09 01:19:30 | INFO  | Flavor SCS-8V-32-100 created
2026-03-09 01:19:32.005363 | orchestrator | 2026-03-09 01:19:30 | INFO  | Flavor SCS-16V-32 created
2026-03-09 01:19:32.005383 | orchestrator | 2026-03-09 01:19:31 | INFO  | Flavor SCS-16V-32-100 created
2026-03-09 01:19:32.005437 | orchestrator | 2026-03-09 01:19:31 | INFO  | Flavor SCS-2V-4-20s created
2026-03-09 01:19:32.005457 | orchestrator | 2026-03-09 01:19:31 | INFO  | Flavor SCS-4V-8-50s created
2026-03-09 01:19:32.005475 | orchestrator | 2026-03-09 01:19:31 | INFO  | Flavor SCS-4V-16-100s created
2026-03-09 01:19:32.005495 | orchestrator | 2026-03-09 01:19:31 | INFO  | Flavor SCS-8V-32-100s created
2026-03-09 01:19:35.485737 | orchestrator | 2026-03-09 01:19:35 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-03-09 01:19:35.499083 | orchestrator | 2026-03-09 01:19:35 | INFO  | Prepare task for execution of bootstrap-basic.
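The flavor names created above follow the SCS flavor naming convention, e.g. `SCS-2V-4-10` reads as 2 vCPUs (`V`), 4 GiB RAM, 10 GB root disk; `L` marks low-performance cores (as in `SCS-1L-1`), and a trailing `s` on the disk field (as in `SCS-2V-4-20s`) marks local SSD/NVMe storage. A rough parser for these names, written purely for illustration and not part of the testbed tooling:

```python
import re

# Simplified, illustrative parser for SCS flavor names as they appear in
# the flavor-manager log, e.g. "SCS-2V-4-10" or "SCS-8V-32-100s".
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_type>[LV])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<disk_type>s?))?$"
)

def parse_scs_flavor(name: str) -> dict:
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not a recognized SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("cpus")),
        "cpu_type": m.group("cpu_type"),           # V = vCPU, L = low-perf
        "ram_gib": int(m.group("ram")),
        "disk_gb": int(m.group("disk")) if m.group("disk") else 0,
        "local_ssd": m.group("disk_type") == "s",  # "…-20s" style names
    }
```

Flavors without a disk field (e.g. `SCS-2V-4`) are diskless/boot-from-volume flavors, which is why the parser treats the disk part as optional.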
2026-03-09 01:19:35.590583 | orchestrator | 2026-03-09 01:19:35 | INFO  | Task e99df921-883a-4247-8a7c-cb460b6549ec (bootstrap-basic) was prepared for execution.
2026-03-09 01:19:35.590691 | orchestrator | 2026-03-09 01:19:35 | INFO  | It takes a moment until task e99df921-883a-4247-8a7c-cb460b6549ec (bootstrap-basic) has been started and output is visible here.
2026-03-09 01:20:32.914875 | orchestrator |
2026-03-09 01:20:32.914999 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-09 01:20:32.915018 | orchestrator |
2026-03-09 01:20:32.915030 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-09 01:20:32.915042 | orchestrator | Monday 09 March 2026 01:19:41 +0000 (0:00:00.089) 0:00:00.089 **********
2026-03-09 01:20:32.915054 | orchestrator | ok: [localhost]
2026-03-09 01:20:32.915066 | orchestrator |
2026-03-09 01:20:32.915077 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-09 01:20:32.915093 | orchestrator | Monday 09 March 2026 01:19:43 +0000 (0:00:02.428) 0:00:02.517 **********
2026-03-09 01:20:32.915151 | orchestrator | ok: [localhost]
2026-03-09 01:20:32.915180 | orchestrator |
2026-03-09 01:20:32.915198 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-09 01:20:32.915216 | orchestrator | Monday 09 March 2026 01:19:53 +0000 (0:00:09.859) 0:00:12.377 **********
2026-03-09 01:20:32.915233 | orchestrator | changed: [localhost]
2026-03-09 01:20:32.915253 | orchestrator |
2026-03-09 01:20:32.915270 | orchestrator | TASK [Create public network] ***************************************************
2026-03-09 01:20:32.915288 | orchestrator | Monday 09 March 2026 01:20:02 +0000 (0:00:09.567) 0:00:21.945 **********
2026-03-09 01:20:32.915325 | orchestrator | changed: [localhost]
2026-03-09 01:20:32.915357 | orchestrator |
2026-03-09 01:20:32.915383 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-09 01:20:32.915422 | orchestrator | Monday 09 March 2026 01:20:09 +0000 (0:00:06.695) 0:00:28.640 **********
2026-03-09 01:20:32.915436 | orchestrator | changed: [localhost]
2026-03-09 01:20:32.915449 | orchestrator |
2026-03-09 01:20:32.915462 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-09 01:20:32.915474 | orchestrator | Monday 09 March 2026 01:20:17 +0000 (0:00:08.066) 0:00:36.706 **********
2026-03-09 01:20:32.915485 | orchestrator | changed: [localhost]
2026-03-09 01:20:32.915495 | orchestrator |
2026-03-09 01:20:32.915506 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-09 01:20:32.915517 | orchestrator | Monday 09 March 2026 01:20:23 +0000 (0:00:05.621) 0:00:42.328 **********
2026-03-09 01:20:32.915527 | orchestrator | changed: [localhost]
2026-03-09 01:20:32.915538 | orchestrator |
2026-03-09 01:20:32.915549 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-09 01:20:32.915573 | orchestrator | Monday 09 March 2026 01:20:28 +0000 (0:00:04.958) 0:00:47.286 **********
2026-03-09 01:20:32.915584 | orchestrator | ok: [localhost]
2026-03-09 01:20:32.915595 | orchestrator |
2026-03-09 01:20:32.915606 | orchestrator | PLAY RECAP *********************************************************************
2026-03-09 01:20:32.915617 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-09 01:20:32.915629 | orchestrator |
2026-03-09 01:20:32.915640 | orchestrator |
2026-03-09 01:20:32.915650 | orchestrator | TASKS RECAP ********************************************************************
2026-03-09 01:20:32.915661 | orchestrator | Monday 09 March 2026 01:20:32 +0000 (0:00:04.283) 0:00:51.570 **********
2026-03-09 01:20:32.915672 | orchestrator | ===============================================================================
2026-03-09 01:20:32.915682 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.86s
2026-03-09 01:20:32.915718 | orchestrator | Create volume type LUKS ------------------------------------------------- 9.57s
2026-03-09 01:20:32.915730 | orchestrator | Set public network to default ------------------------------------------- 8.07s
2026-03-09 01:20:32.915740 | orchestrator | Create public network --------------------------------------------------- 6.69s
2026-03-09 01:20:32.915752 | orchestrator | Create public subnet ---------------------------------------------------- 5.62s
2026-03-09 01:20:32.915762 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.96s
2026-03-09 01:20:32.915773 | orchestrator | Create manager role ----------------------------------------------------- 4.28s
2026-03-09 01:20:32.915788 | orchestrator | Gathering Facts --------------------------------------------------------- 2.43s
2026-03-09 01:20:36.028626 | orchestrator | 2026-03-09 01:20:36 | INFO  | It takes a moment until task cd1e4edc-67bb-408f-9144-02274ec9f1c3 (image-manager) has been started and output is visible here.
2026-03-09 01:21:19.578839 | orchestrator | 2026-03-09 01:20:39 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-09 01:21:19.578949 | orchestrator | 2026-03-09 01:20:39 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-09 01:21:19.578968 | orchestrator | 2026-03-09 01:20:39 | INFO  | Importing image Cirros 0.6.2
2026-03-09 01:21:19.578982 | orchestrator | 2026-03-09 01:20:39 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-09 01:21:19.578996 | orchestrator | 2026-03-09 01:20:41 | INFO  | Waiting for image to leave queued state...
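The image-manager output here shows a classic status-polling pattern ("Waiting for image to leave queued state...", then "Waiting for import to complete..."). A minimal sketch of that loop, where `get_status` stands in for a Glance API call; this is illustrative and not the actual openstack-image-manager code:

```python
import time

def wait_for_status(get_status, target="active", interval=1.0,
                    timeout=300.0, sleep=time.sleep):
    """Poll `get_status()` until it returns `target`; raise on timeout.

    `get_status` is a placeholder for a call such as fetching the image's
    current status from Glance; `sleep` is injectable so the loop can be
    exercised without real waiting.
    """
    waited = 0.0
    while waited < timeout:
        status = get_status()
        if status == target:
            return status
        sleep(interval)
        waited += interval
    raise TimeoutError(f"status did not reach '{target}' within {timeout}s")
```

The injectable `sleep` keeps the helper testable; in production code the default `time.sleep` applies and the interval/timeout mirror the roughly one-second refresh cadence visible in the log.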
2026-03-09 01:21:19.579010 | orchestrator | 2026-03-09 01:20:43 | INFO  | Waiting for import to complete...
2026-03-09 01:21:19.579022 | orchestrator | 2026-03-09 01:20:53 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-09 01:21:19.579036 | orchestrator | 2026-03-09 01:20:54 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-09 01:21:19.579049 | orchestrator | 2026-03-09 01:20:54 | INFO  | Setting internal_version = 0.6.2
2026-03-09 01:21:19.579060 | orchestrator | 2026-03-09 01:20:54 | INFO  | Setting image_original_user = cirros
2026-03-09 01:21:19.579073 | orchestrator | 2026-03-09 01:20:54 | INFO  | Adding tag os:cirros
2026-03-09 01:21:19.579086 | orchestrator | 2026-03-09 01:20:54 | INFO  | Setting property architecture: x86_64
2026-03-09 01:21:19.579098 | orchestrator | 2026-03-09 01:20:54 | INFO  | Setting property hw_disk_bus: scsi
2026-03-09 01:21:19.579110 | orchestrator | 2026-03-09 01:20:55 | INFO  | Setting property hw_rng_model: virtio
2026-03-09 01:21:19.579122 | orchestrator | 2026-03-09 01:20:55 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-09 01:21:19.579135 | orchestrator | 2026-03-09 01:20:55 | INFO  | Setting property hw_watchdog_action: reset
2026-03-09 01:21:19.579147 | orchestrator | 2026-03-09 01:20:55 | INFO  | Setting property hypervisor_type: qemu
2026-03-09 01:21:19.579168 | orchestrator | 2026-03-09 01:20:56 | INFO  | Setting property os_distro: cirros
2026-03-09 01:21:19.579181 | orchestrator | 2026-03-09 01:20:56 | INFO  | Setting property os_purpose: minimal
2026-03-09 01:21:19.579193 | orchestrator | 2026-03-09 01:20:56 | INFO  | Setting property replace_frequency: never
2026-03-09 01:21:19.579206 | orchestrator | 2026-03-09 01:20:56 | INFO  | Setting property uuid_validity: none
2026-03-09 01:21:19.579218 | orchestrator | 2026-03-09 01:20:57 | INFO  | Setting property provided_until: none
2026-03-09 01:21:19.579229 | orchestrator | 2026-03-09 01:20:57 | INFO  | Setting property image_description: Cirros
2026-03-09 01:21:19.579242 | orchestrator | 2026-03-09 01:20:57 | INFO  | Setting property image_name: Cirros
2026-03-09 01:21:19.579280 | orchestrator | 2026-03-09 01:20:57 | INFO  | Setting property internal_version: 0.6.2
2026-03-09 01:21:19.579293 | orchestrator | 2026-03-09 01:20:58 | INFO  | Setting property image_original_user: cirros
2026-03-09 01:21:19.579306 | orchestrator | 2026-03-09 01:20:58 | INFO  | Setting property os_version: 0.6.2
2026-03-09 01:21:19.579319 | orchestrator | 2026-03-09 01:20:58 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-09 01:21:19.579333 | orchestrator | 2026-03-09 01:20:59 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-09 01:21:19.579344 | orchestrator | 2026-03-09 01:20:59 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-09 01:21:19.579358 | orchestrator | 2026-03-09 01:20:59 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-09 01:21:19.579378 | orchestrator | 2026-03-09 01:20:59 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-09 01:21:19.579390 | orchestrator | 2026-03-09 01:21:00 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-09 01:21:19.579434 | orchestrator | 2026-03-09 01:21:00 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-09 01:21:19.579447 | orchestrator | 2026-03-09 01:21:00 | INFO  | Importing image Cirros 0.6.3
2026-03-09 01:21:19.579459 | orchestrator | 2026-03-09 01:21:00 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-09 01:21:19.579471 | orchestrator | 2026-03-09 01:21:00 | INFO  | Waiting for image to leave queued state...
2026-03-09 01:21:19.579484 | orchestrator | 2026-03-09 01:21:02 | INFO  | Waiting for import to complete...
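The "Checking parameters" step above applies the same property set to both Cirros images and only writes properties that are missing or differ. A sketch of that reconcile step; the property names and values are taken from the log, while the helper itself is illustrative and not the actual openstack-image-manager code:

```python
# Common image properties visible in the image-manager log output.
COMMON_PROPERTIES = {
    "architecture": "x86_64",
    "hw_disk_bus": "scsi",
    "hw_rng_model": "virtio",
    "hw_scsi_model": "virtio-scsi",
    "hw_watchdog_action": "reset",
    "hypervisor_type": "qemu",
    "os_distro": "cirros",
    "os_purpose": "minimal",
    "replace_frequency": "never",
    "uuid_validity": "none",
    "provided_until": "none",
}

def properties_to_set(current: dict, wanted: dict) -> dict:
    """Return only the properties whose value is missing or different,
    i.e. the minimal set of Glance property updates to issue."""
    return {k: v for k, v in wanted.items() if current.get(k) != v}
```

On a freshly imported image `current` is nearly empty, so every property gets set, which matches the long run of "Setting property ..." lines in the log.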
2026-03-09 01:21:19.579515 | orchestrator | 2026-03-09 01:21:12 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-09 01:21:19.579529 | orchestrator | 2026-03-09 01:21:13 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-09 01:21:19.579542 | orchestrator | 2026-03-09 01:21:13 | INFO  | Setting internal_version = 0.6.3
2026-03-09 01:21:19.579554 | orchestrator | 2026-03-09 01:21:13 | INFO  | Setting image_original_user = cirros
2026-03-09 01:21:19.579567 | orchestrator | 2026-03-09 01:21:13 | INFO  | Adding tag os:cirros
2026-03-09 01:21:19.579578 | orchestrator | 2026-03-09 01:21:13 | INFO  | Setting property architecture: x86_64
2026-03-09 01:21:19.579590 | orchestrator | 2026-03-09 01:21:13 | INFO  | Setting property hw_disk_bus: scsi
2026-03-09 01:21:19.579603 | orchestrator | 2026-03-09 01:21:14 | INFO  | Setting property hw_rng_model: virtio
2026-03-09 01:21:19.579615 | orchestrator | 2026-03-09 01:21:14 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-09 01:21:19.579628 | orchestrator | 2026-03-09 01:21:14 | INFO  | Setting property hw_watchdog_action: reset
2026-03-09 01:21:19.579638 | orchestrator | 2026-03-09 01:21:14 | INFO  | Setting property hypervisor_type: qemu
2026-03-09 01:21:19.579649 | orchestrator | 2026-03-09 01:21:15 | INFO  | Setting property os_distro: cirros
2026-03-09 01:21:19.579660 | orchestrator | 2026-03-09 01:21:15 | INFO  | Setting property os_purpose: minimal
2026-03-09 01:21:19.579670 | orchestrator | 2026-03-09 01:21:15 | INFO  | Setting property replace_frequency: never
2026-03-09 01:21:19.579681 | orchestrator | 2026-03-09 01:21:15 | INFO  | Setting property uuid_validity: none
2026-03-09 01:21:19.579692 | orchestrator | 2026-03-09 01:21:16 | INFO  | Setting property provided_until: none
2026-03-09 01:21:19.579702 | orchestrator | 2026-03-09 01:21:16 | INFO  | Setting property image_description: Cirros
2026-03-09 01:21:19.579724 | orchestrator | 2026-03-09 01:21:16 | INFO  | Setting property image_name: Cirros
2026-03-09 01:21:19.579737 | orchestrator | 2026-03-09 01:21:16 | INFO  | Setting property internal_version: 0.6.3
2026-03-09 01:21:19.579749 | orchestrator | 2026-03-09 01:21:17 | INFO  | Setting property image_original_user: cirros
2026-03-09 01:21:19.579761 | orchestrator | 2026-03-09 01:21:17 | INFO  | Setting property os_version: 0.6.3
2026-03-09 01:21:19.579773 | orchestrator | 2026-03-09 01:21:17 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-09 01:21:19.579785 | orchestrator | 2026-03-09 01:21:18 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-09 01:21:19.579798 | orchestrator | 2026-03-09 01:21:18 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-09 01:21:19.579810 | orchestrator | 2026-03-09 01:21:18 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-09 01:21:19.579821 | orchestrator | 2026-03-09 01:21:18 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-09 01:21:20.028284 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-09 01:21:23.031957 | orchestrator | 2026-03-09 01:21:23 | INFO  | date: 2026-03-08
2026-03-09 01:21:23.032027 | orchestrator | 2026-03-09 01:21:23 | INFO  | image: octavia-amphora-haproxy-2024.2.20260308.qcow2
2026-03-09 01:21:23.032049 | orchestrator | 2026-03-09 01:21:23 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260308.qcow2
2026-03-09 01:21:23.032057 | orchestrator | 2026-03-09 01:21:23 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260308.qcow2.CHECKSUM
2026-03-09 01:21:23.169581 | orchestrator | 2026-03-09 01:21:23 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/c90ab4e2d1c44c968e4ff9157216eb51/work/logs"
2026-03-09 01:21:56.152766 |
orchestrator -> localhost | changed: "/var/lib/zuul/builds/c90ab4e2d1c44c968e4ff9157216eb51/work/artifacts"
2026-03-09 01:21:56.446620 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c90ab4e2d1c44c968e4ff9157216eb51/work/docs"
2026-03-09 01:21:56.471197 |
2026-03-09 01:21:56.471366 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-09 01:21:57.401525 | orchestrator | changed: .d..t...... ./
2026-03-09 01:21:57.401962 | orchestrator | changed: All items complete
2026-03-09 01:21:57.402030 |
2026-03-09 01:21:58.149095 | orchestrator | changed: .d..t...... ./
2026-03-09 01:21:58.871251 | orchestrator | changed: .d..t...... ./
2026-03-09 01:21:58.901685 |
2026-03-09 01:21:58.901908 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-09 01:21:58.943405 | orchestrator | skipping: Conditional result was False
2026-03-09 01:21:58.947915 | orchestrator | skipping: Conditional result was False
2026-03-09 01:21:58.965362 |
2026-03-09 01:21:58.965491 | PLAY RECAP
2026-03-09 01:21:58.965570 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-09 01:21:58.965612 |
2026-03-09 01:21:59.094576 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-09 01:21:59.097292 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-09 01:21:59.844752 |
2026-03-09 01:21:59.844924 | PLAY [Base post]
2026-03-09 01:21:59.859774 |
2026-03-09 01:21:59.859922 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-09 01:22:01.619441 | orchestrator | changed
2026-03-09 01:22:01.628483 |
2026-03-09 01:22:01.628612 | PLAY RECAP
2026-03-09 01:22:01.628684 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-09 01:22:01.628773 |
2026-03-09 01:22:01.753219 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-09 01:22:01.754389 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-09 01:22:02.591292 |
2026-03-09 01:22:02.591459 | PLAY [Base post-logs]
2026-03-09 01:22:02.606005 |
2026-03-09 01:22:02.606291 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-09 01:22:03.080980 | localhost | changed
2026-03-09 01:22:03.096509 |
2026-03-09 01:22:03.096675 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-09 01:22:03.133433 | localhost | ok
2026-03-09 01:22:03.138180 |
2026-03-09 01:22:03.138324 | TASK [Set zuul-log-path fact]
2026-03-09 01:22:03.166166 | localhost | ok
2026-03-09 01:22:03.179168 |
2026-03-09 01:22:03.179301 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-09 01:22:03.216784 | localhost | ok
2026-03-09 01:22:03.223966 |
2026-03-09 01:22:03.224142 | TASK [upload-logs : Create log directories]
2026-03-09 01:22:03.735380 | localhost | changed
2026-03-09 01:22:03.739687 |
2026-03-09 01:22:03.739855 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-09 01:22:04.231132 | localhost -> localhost | ok: Runtime: 0:00:00.007234
2026-03-09 01:22:04.235328 |
2026-03-09 01:22:04.235448 | TASK [upload-logs : Upload logs to log server]
2026-03-09 01:22:04.801867 | localhost | Output suppressed because no_log was given
2026-03-09 01:22:04.806016 |
2026-03-09 01:22:04.806213 | LOOP [upload-logs : Compress console log and json output]
2026-03-09 01:22:04.867171 | localhost | skipping: Conditional result was False
2026-03-09 01:22:04.872665 | localhost | skipping: Conditional result was False
2026-03-09 01:22:04.880213 |
2026-03-09 01:22:04.880391 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-09 01:22:04.941379 | localhost | skipping: Conditional result was False
2026-03-09 01:22:04.942086 |
2026-03-09 01:22:04.945367 | localhost | skipping: Conditional result was False
2026-03-09 01:22:04.951330 |
2026-03-09 01:22:04.951506 | LOOP [upload-logs : Upload console log and json output]